AI & Kids

This paragraph from Peter Coy's essay in the (paywalled) New York Times opinion section is just great.

Computer scientists are continually surprised by the creativity displayed by new generations of A.I. Consider that lying is a sign of intellectual development: Children learn to lie around age 3, and they get better at it as they develop. As a liar, artificial intelligence is way past the toddler stage.

Later in the same essay, Coy quoted research by Apollo Research on the manipulativeness of OpenAI's o1 model:

Data manipulation: In a simple data processing task where the goals of the model were misaligned with the user’s goal, o1 would appear to complete the task as requested but subtly manipulate the data to advance its own goal in 19 percent of cases.

Doubling down: When Apollo Research conducted ‘follow-up interviews’ and confronted the model about its misaligned actions, o1 would deny taking action or fabricate false explanations in 99 percent of cases.

What I find so helpful and interesting about this research is that it revealed something I believe is true but hadn't realized I knew.

That truth: When I ask my kids questions (the oldest of them is five as of this writing), I naturally assume that they might be telling me something they think I want to hear or telling me a version of something that will further their goals.

So... here we are in a world with extremely smart, somewhat unpredictable AIs that we don't totally understand. These AIs are capable of a great deal; at times, they can seem remarkably effective at knowing things and completing all kinds of tasks.

But it might be really useful to remember that no matter how much information an AI has and how vast its vocabulary is, it might be more like a very clever young child than an oddball adult intern.