Instead of asking whether AI will disrupt education or deepen educational inequality (probably both), it might be more useful to ask which aspects of education AI can improve, i.e., forms of learning that are impossible or prohibitively expensive today but could become accessible with an AI tutor.
The results of a small experiment are below.
Most education right now is content-based, i.e., students are taught math and history and other subjects; the purpose of the classroom - at least when it works - is to teach students facts about the world and the theories that explain those facts. Math isn't entirely taught that way: we also learn how to solve problems. Language instruction can also be practice-oriented: how to write creatively, for example.
On the whole, though, know-what is more important than know-how.
Contrast that with coaching on a soccer field. Players are asked to practice a range of moves again and again until they become muscle memory. A good coach will also give an errant player precise feedback on what went wrong and how to improve that play.
Real-time feedback while solving math problems is hard and expensive. Technologists have thought about disrupting this market for a long time and have tried to incorporate automated feedback and graded instruction into their programs. However, these programs are boring as hell and don't expose learners to the beauty of math, writing, or history.
Can AI solve both problems at once? Can it help us train critical thinkers who have broad knowledge of the world and can express that knowledge succinctly?
I tried a tiny experiment to see if something like this is even possible. My goal:
Get me into a flow state where I ask questions and use those to explore a topic and learn how to think as much as what to think about.
It's easy to switch to a different register:
We can bring out the emotional aspects of this conversation:
More empathetic than most of my teachers growing up. But all of this is still content-driven. What if we want to figure out how to think about astronomy, not just what to think about it?
Not bad, though the disclaimer at the beginning is unnecessary. We can probe it further:
Explain the plausibility of the heliocentric model to the five-year-old me:
And we can expand our inquiry to models from other fields:
End of experiment. I give ChatGPT a 4/10.
The Good: It can dial sophistication up and down and explain at the level of the learner. It creates links to examples and similar theories on the fly. The rapid back-and-forth makes it easier to achieve a flow state.
The Bad: Pretty boring. Too much knowing what and not enough knowing how. Not enough of a mind trainer.
The Ugly: No way to mark up the answers. I should be able to see the geocentric and heliocentric models in action as annotations.
However, none of these are deal-breakers in the long term IMHO. Far more important than improving the technical capacities of ChatGPT is ensuring that the knowledge so created is a public good. I mean that in a technical sense:
Open source is neither necessary nor sufficient when it comes to AI (I will explain why in future essays, but the short version is that it's too libertarian and property-focused). We need something like 'common source,' just as the earth itself is the ultimate commons.
What might be a first step towards creating a knowledge commons?
We have to create a canonical resource that serves as a test bed for common-source, AI-driven knowledge creation and learning. And therefore:
What would Wikipedia look like if it were a post-ChatGPT resource?
What would the current Wikipedia look like if we accessed it through a chat-like API?
How would you redesign a global knowledge resource from scratch if that were your intention?
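To make the second question concrete, here is a minimal sketch of what a chat-like layer over today's Wikipedia might look like. It uses Wikipedia's real REST summary endpoint, but the `chat_turn` function and its prompt format are hypothetical placeholders: a real system would hand the assembled, article-grounded prompt to a language model rather than return it.

```python
import json
import urllib.request

# Real endpoint: returns JSON with an "extract" field for an article's lead section.
WIKI_SUMMARY = "https://en.wikipedia.org/api/rest_v1/page/summary/{}"

def fetch_summary(title):
    """Fetch the lead-section summary of a Wikipedia article (network call)."""
    url = WIKI_SUMMARY.format(title.replace(" ", "_"))
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read())["extract"]

def chat_turn(question, title, fetch=fetch_summary):
    """Hypothetical chat turn: ground a free-form question in an article summary.
    Here we only assemble the grounded prompt, to show the shape of the interface."""
    summary = fetch(title)
    return (f"Context from Wikipedia ('{title}'):\n{summary}\n\n"
            f"Question: {question}\nAnswer:")

# Usage with a stubbed fetcher, so the sketch runs without network access:
stub = lambda title: "The heliocentric model places the Sun at the center."
prompt = chat_turn("Why was heliocentrism plausible?", "Heliocentrism", fetch=stub)
```

The design choice worth noticing is that the chat layer sits on top of the existing encyclopedia rather than replacing it: answers stay anchored to a canonical, commonly held article, which is one way a knowledge commons could coexist with conversational access.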
I will take up these questions in future essays.