journal/Drafts/AI - Barnes, Kempf.md
Thaddeus Hughes, 2025-10-10 07:36:45 -05:00

Talking about new and emerging technology is very annoying. It is, well, new and emerging. What form will it take? So, when we talk about "AI", I grumble - because that could mean any number of models. So, let's home in on Large Language Models (LLMs). But even these could take any number of embodiments. Let's home in on chatbots.
Marc Barnes put out a great article against chatbots. Actually, I haven't read it yet; I listened to the podcast. I'm assuming it's the same argument, though. Marc's central argument is this: chatbots elicit conversation. Conversation is for communion. Communion can only be had between two real intellects/persons. The chatbot is not a real intellect/person. The ends are frustrated. Thus, the chatbot is immoral.
The general line is compelling - but there are a few weak links in the chain of reasoning. And I say: this poses an opportunity. I do suspect that if we barrel down the current path, Barnes is right. But if we ride the edge of the wave just right, we are presented with a wonderful opportunity.
### Masturbation
One can immediately see how Marc's argument maps onto physical intimacy, onto sex - something else that is obviously aimed at communion. In form, the chatbot looks much like pornography and masturbation.
The responses of the bot are "pornographic" - they are derived from stereotypes of the world, and at that, are curated and amplified. Fantasies emerge.
The reception of that information is likely masturbatory. It draws us inwards, away from conversation with others. We are so clever. We have the best information presented to us. We have no need for sex, er, I mean, no need for conversation with another person.
The trouble, though, is that I'm not sure that people are the only things we have conversations with. When I pick up a tool and begin to work a piece of material with it, it isn't a linear process. The material talks back to me. As I sink my chisel in, new grain is revealed, and I may have to alter course. I learn more. I do have a communion with the material. It is, of course, a lesser communion than I would have with a human, but it is a communion - my human soul becoming closer to this inanimate soul.
The idea of typing something in, pressing enter, and receiving text output is how computers operated pretty much from the beginning, up until GUIs became dominant. Of course, command lines and shells require their own proper syntax, which is precise, and opaque to the beginner. You would never tell a friend "grep -ls ..."; you would simply ask them to find it for you. It remains clear one is commandeering a machine, not speaking to a person with a will.
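A toy illustration of the gap (the directory, file, and phrase here are invented for illustration):

```shell
# Machine-speak: precise, terse, unforgiving syntax.
mkdir -p notes
echo "the frost damaged the peppers" > notes/april.txt

# "Which of my notes mention frost?" becomes:
grep -rl "frost" notes/
```

A friend would hear the plain request; the shell demands the exact incantation.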
### John Kempf
John Kempf is an Amishman. He runs a fairly large consulting company, Advancing Eco Agriculture, and has a fantastic podcast. Not things you would expect one of the Amish to do.
Marc Barnes notes that the Amish serve as a sign to us English - that you can in fact choose what technologies to use as a society. We are not resigned to go the way of the world. No technology is inevitable. We can steer the ship.
The Amish aren't Luddites. You'll see some odd decisions if you drive through Amish country - like a house with solar panels and a diesel generator, but no grid connection. You may see a woodshop with no electricity that runs entirely on a diesel-powered air compressor. The oddest thing I've seen is forklifts modified so that the operator stands rather than sits. Maybe the rules make sense, maybe not - but the point is, they have made decisions as a people that aren't merely whatever the almighty dollar seems to suggest.
So, it might surprise many to learn that this Amishman, John Kempf, is leading the development of an AI chatbot. It's called Field Lark.
What problem is Kempf trying to solve? I think, actually, a very good one. And I think there's a lot of merit to the way he's going about it.
Kempf is trying to take the knowledge that has been written in manifold texts about plant behavior and nutrition, and bring it forward at a rapid pace. These are tremendously multivariate, nonlinear processes. His ultimate aim is to treat diseases and pests at the *root cause* and create food that is "of such an exceptional quality we can begin to have a real conversation about food as medicine". That's a tricky thing to do, because the answers do not readily present themselves.
So, how's AI going to help us?
To speak loosely, we have two modes of thinking (or maybe, these are two ends of a spectrum). We can think logically - where we use hard rules - or we can think intuitively - by "magic" or "association".
Computers are really good at doing lots of this "logical" thinking very quickly, with a ton of inputs. We aren't good at that, which is why we fall back on intuition when it comes time to solve hard problems. Think of an LLM as "glue": it turns loose human language into tokens and structured inputs on which those logical processes can be run, and it serves again as part of the interface between computer and human.
Kempf's aim is to use this sort of simulated intuition to sharpen our own intuitions. I don't think he believes it'll work because the AI is smarter. Rather, if it works, it'll be like a stone sharpening a knife. It will ask thought-provoking questions.
He's trying to crack a hard nut. Lots of people have learned lots of things and written them down. But biological processes are just too vast to master; they have an inexhaustible comprehensibility - there is always more to understand.
But Kempf certainly doesn't seem to think the goal is to make an agronomist replacement. Such an idea makes about as much sense as saying that spreadsheets will replace accountants. No, spreadsheets are where accountants go to think. I can design mechanical systems with paper and pen, but boy, I can think about things a lot better with a CAD system. I go to CAD to think.
Really, the way Kempf described it sounded a lot like how I use CAD. I give my constraints to a sketch. The geometry engine "solves" it. I look at the result. I make changes accordingly. This is a conversation - a back-and-forth, just like the back-and-forth I might have with a piece of wood that I carve.
How's it working? Kempf relates one story from one of his growers: this grower chatted with Field Lark for a while. Then, he talked with an AEA consultant. He felt that the consultant was able to answer his questions to a much higher degree than Field Lark did. But he also felt that he was able to ask much more competent questions, and thus get much better answers, than if he had not used Field Lark to begin with. This is pretty much exactly how mechanical design with CAD goes. I have certain ideas, I think about them in my mind, but then I put everything out on the computer, and can visualize the result. Then, I can have a much better conversation with the client I'm doing the design work for. There is an enhancement in the quality of conversation with another person, and in productivity. *There is also a decrease in the quantity of conversation with another person, it must be admitted.*
Kempf was conscious of some of the problems inherent to chatbots - and set out deliberately to make something that doesn't present itself as a person, a persona, or anything like that. I don't really know what it is. But when I listened to Marc talk, I said "that's it, I'm changing my Grok settings" - I gave it a prompt to never act like a person, to never use the words "I", "me", or "we", and to make its output read like an encyclopedia.
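Roughly, the instruction ran along these lines (a paraphrase of its substance, not the exact wording):

```text
Never present yourself as a person or persona. Do not use the
words "I", "me", or "we". Do not express feelings or opinions.
Write in a neutral, impersonal register, like an encyclopedia
entry.
```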
I think that's the thing. Interaction with anything is always a conversation, and it leads to communion. But we need to bear in mind the sort of thing we are coming into communion with. When we communicate with a chatbot, we are not communicating with a person. It's more like an encyclopedia. Or more realistically, a spreadsheet. Or a CAD sketch.
It must be noted: Kempf's approach, and indeed that of his entire company, has been "test, don't guess". He is very much a proponent of science.
### Appropriate Design
Technology can do a lot of things. We have certain types of motivations when we develop it. Here are some that I think are generally noble, as they are likely motivated by love:
- to carry out a task more quickly, so that other things can be done
- to carry out a task with higher quality
- to carry out a task with greater efficiency/fewer ill consequences
- to carry out a task that was not possible before
Here are some that I think are suspect, as they may be motivated by vice:
- to have a task be performed with less effort, or even no effort
This seems nitpicky. Isn't doing a task more quickly or efficiently the same as doing it with less effort? There's a difference. It's the difference between the workman who drags his feet and just wants the job to be over by any means, and the workman who is spry and loves the job. The former is vicious, and will do shoddy work. The latter is virtuous, and will do good work. If we design technology right, the work actually becomes faster, of higher quality, and of greater pleasure to the workman. This is what we are after - not mere labor-eradication.
Let's turn our attention to computers. I think we have to consider computer systems in general, not just AIs. They enable us to do a plethora of things, of course. But I really think the thing they do best is serve as a supercharged desk - a supercharged drafting table. They present us with information which we can observe, manipulate, and move.
Many people think that what we need is automation of processes, but this is, I think, orthogonal to the important axis of discussion. The question is: does the system clarify, or obfuscate, reality? Automated systems do tend toward obfuscation - by necessity, they often hide things from us. However, sometimes hiding is necessary for clarification - the man who wishes to look down a dark hole in the middle of a field must shade out the sun. Computers allow us to perform this sort of hiding and clarifying automatically. When it goes well, they are a joy to use. We get frustrated with them not necessarily when they show or hide too much or too little, but when they obfuscate the needful information - whether by hiding it or by drowning it in a sea of other junk.
We have to bear in mind the data in question is **about** reality. It is not reality itself, but it pertains to reality, or at least ought to, just as speech ought to.
So the computer then is to serve as a means to communion with reality, just like books, microscopes, ledgers of record, and so forth. That is to say, they help us understand and shape reality. They help - and perhaps greatly so - but they only help.
Everything I have said here applies as much to conventional computing such as spreadsheets as it does to an AI system.
Troubles arise when we shift responsibilities that are clearly ours onto machines - when we delegate our will. We are deceived, perhaps first, when we think that the machine has a will. But it does not. It's a supercharged desk. It does what we ask it to do. If we invert that relationship, we will debase ourselves, behaving as the animals do - they are our subjects, not the other way around.
But the computer is good when it serves as this desk on steroids - as expanded mental memory. It does augment and change us, though. We have to be honest about this. But every technology does this - even the most primitive of tools shapes us in turn. We cannot figure out whether a technology is good or bad by determining how close it keeps us to a Rousseauean "state of nature". We can ask, though, whether it brings us into closer alignment with our nature. As a glorified desk? Yes - the computer affords us the ability to exercise our will, and to devote our working memory not so much to holding raw information as to actually thinking.
> "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."
>
> - Frank Herbert, *Dune*