The charge of indoctrination is frequently leveled at higher education. The idea that colleges are in the business of indoctrination is a standard trope in attacks on higher education. Foes of education aren’t just preaching to the choir with this indictment. The appeal of the indoctrination charge is significantly wider, since many of our students aren’t in a good position to tell the difference between education and indoctrination. This is worrisome, and educators face a bind in responding. Some of the conclusions we argue for in disciplines like biology (evolution, vaccines), earth science (climate change), and several of the social sciences (racism and sexism are real) have become sites of culture war conflict. How do we defend those disciplines and the uncomfortable truths they reveal without appearing partisan, taking sides and thereby confirming the assessment of colleges as indoctrination centers?
Consider the issue from the perspective of students. Many of our students lack well-developed reasoning skills. Their education in critical thinking has many gaps and leaves much to be desired. A student who has never really been taught how to track and process reasoning for themselves is not in a good position to tell the difference between good arguments for conclusions they may find uncomfortable and mere indoctrination. In the absence of robust education in critical thinking, the best we can hope to do with many of our students is preach to the already converted. And to whatever degree we are successful at that, we will at the same time affirm in other students the false appearance that indoctrination is all we are up to.
It’s hard to say how many students will quietly be put off because they lack the reasoning skills needed to appreciate how evidence and argument lead to conclusions they find uncomfortable. But I’d suggest the uncertainty here is cause for more concern, not less. Students who fail to appreciate the strength of good arguments bearing on culturally sensitive topics aren’t just missing an educational opportunity. These are students who will emerge into the broader world vulnerable to disingenuous manipulation by forces that would very much like to refashion institutions like ours into indoctrination centers. And what better way than to suggest that we already are indoctrination centers, just not the right sort?
Students need a robust education in critical thinking if we want them to recognize the difference between reasoning based on good evidence and merely being told what to think. Granted, some of our students may arrive in our classrooms well prepared to respond to reasons: those with high cultural capital, those who grew up around the highly educated or were educationally fortunate themselves. We face an equity gap between these few and the rest of our students. What shall we do to close it?
I’m not a cop. I hate having to deal with this issue. But we are seeing AI-generated writing turned in by students, and this thwarts our efforts as educators trying to help our students learn. So we’d better have some tools. One of our philosophy adjuncts, Davis Smith, has compiled a few tell-tale signs and found a few online tools (links below) for detecting AI-generated writing. Here are the contents of his email this morning:
Like me, you read a lot of student writing. This gives you a good nose for telling when something is off in the writing. In the cases of Philosophy writing which I have seen from AI, I have noticed:
- We humans have a thousand things bouncing around our skulls at any given moment. Try as we might to prevent it, this will impact our writing. There will be sudden changes in word choice and tone, notes obviously inserted after the fact, and sudden spurts of creativity. This gives our writing a degree of bounciness; it sort of jumps up and down. AI writing isn’t like that. AI writing is WAY too smooth, going from A to B with no detours and never taking the scenic route.
- There will be meaningless platitudes and very neutral language. For example, in all of the essays I assign my students, a quarter of the points depend on whether they take a stand and give reasons for their stance. I call this the PEE method (Present, Explain, and Evaluate). AI writing is really bad at the evaluation part, so it will take on an air of neutrality and not try to rock the boat. Humans aren’t like that. We have opinions and reasons for them, and I want the students to pull them out.
- For my classes, I use certain terms and phrases differently than how they would be found on the internet, such as ‘Cultural Relativism’, ‘Objectivism’, and ‘Libertarianism’. If a student is paying attention to the content, then they will understand, for example, that ‘Cultural Relativism’ (the term itself) does not describe a complete stance; they need to say what is relative to the beliefs of the culture. AI will automatically assume that Cultural Relativism refers to Moral Cultural Relativism. For Objectivism, an AI might just assume that the paper is on Ayn Rand’s Ethical Egoism (because she called it that). And for ‘Libertarianism’, I can almost guarantee that the AI will write about the socio-economic stance rather than the metaphysical stance about free will. Fundamentally missing the mark like this on terms which are particular to a field is a sign that it’s AI.
That said, here are the three AI detectors I use, in order (if the first one flags, I move to the second, and if the second flags, I move to the third). Though I will admit that a recent update to GPTZero makes it 99% accurate for detecting human writing, so maybe I will not need the others.
- https://gptzero.me/ This is GPTZero, which was the first chat-bot detector made for academia. I would really like this one included in Canvas to do an auto screening of submitted work.
- https://copyleaks.com/ai-content-detector This one is my default second check. It even gives a percent likelihood that it was AI.
- https://writer.com/ai-content-detector/ This one I am looking to replace because the character-limit is far too small for my students’ papers, especially the AI generated ones.
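The sequential workflow described above (escalate to the next detector only when the previous one flags, and treat a submission as AI-generated only if every check agrees) can be sketched in a few lines. This is purely illustrative: the detector functions below are hypothetical stand-ins with made-up scores, not calls to GPTZero, Copyleaks, or Writer, each of which has its own interface.

```python
# A minimal sketch of the chained-check procedure, assuming each detector
# can be wrapped as a function returning an estimated probability (0 to 1)
# that the text is AI-generated. These are NOT real API calls.

def looks_ai_generated(text, detectors, threshold=0.5):
    """Flag text only if every detector in the chain flags it.

    We stop early the moment one detector declines to flag,
    mirroring "if the first one flags, I move to the second".
    """
    for detector in detectors:
        if detector(text) < threshold:
            return False  # one detector declined to flag; stop the chain
    return True  # every detector in the chain flagged the text

# Hypothetical stand-ins with made-up scores, purely for illustration.
def stand_in_gptzero(text):
    return 0.9

def stand_in_copyleaks(text):
    return 0.8

def stand_in_writer(text):
    return 0.3

chain = [stand_in_gptzero, stand_in_copyleaks, stand_in_writer]
print(looks_ai_generated("sample essay text", chain))
```

Requiring agreement from every detector, rather than any one, is what keeps the false-positive rate down; a single flag only triggers the next check, never an accusation.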
We use words to express ideas. In principle, we could use any word to mean anything we like. Meaning is usage. If all the English speakers agreed to use the word “cat” to refer to goldfish, goldfish would be what the word “cat” means. While the meaning of a word is totally up to us as a linguistic community, the only way we can ever hope to communicate with each other effectively is by coming to some consensus on how a word is going to be used. Definitions typically belong to linguistic communities, not individuals. Nobody is going to stop me from defining words however I like. But people just won’t understand what I’m saying if I get too creative about what meanings I’m attaching to the words I use. What matters is that we use words in ways that provide clarity of communication, at least to the degree that it’s required for the purpose at hand.
In everyday discourse we have a fair amount of wiggle room regarding what words mean. Many words are ambiguous, that is, they have multiple meanings and can be used to express one idea or another (to know a person isn’t really the same thing as to know that 2+2=4). Sometimes we can reliably convey something using words in ways that deviate from any of their meanings (“I just knew he’d say that!” when I didn’t really know, but maybe just had a hunch). And words are often vague in meaning (“I’m not exactly bald, not just yet”). There are various cues, some linguistic, some otherwise social, that can usually make our meaning clear enough, if not entirely clear. But we rely on the standards of our linguistic community to fix meanings in ways that are good enough to share our thoughts.
Ordinary language only gets us so far. Often thinking clearly requires that we identify a specific idea and hold it still in order to see clearly how it relates to other ideas. To do this we introduce technical definitions for words. That is, we define a word in a specific way, with the understanding that we are going to use the word in that specific way and not in other ways in a certain context. The context may be an entire branch of study. “Adaptation,” for instance, has a specific meaning in evolutionary biology. Or the context may be a single paper. It’s quite common for a philosopher to define a word in a specific way for the purpose of formulating a particular argument. The technical definition provides a way to focus on a specific idea, often carefully distinguishing it from closely related ideas, when ordinary everyday language isn’t rich enough or stable enough to do the job.
A key step in building a rich conceptual framework involves getting comfortable with technical definitions. Having a richer conceptual framework illuminates how ideas relate to each other and affords a richer understanding of things in general. Understanding things more clearly requires tracking technical definitions and then keeping the specific idea they pick out in mind in subsequent uses of the word.
Usually, when I start to introduce students to how words are used in philosophy, they quickly get distracted with what the word means to them. This is quite literally a distraction. As soon as I start thinking about what a word means to me, I’m changing the topic from whatever idea we set out to analyze in favor of something else that’s going on in my head. This will invite confusion.
Staying on topic can be challenging in philosophy, especially since many of the ideas we are trying to analyze and better understand are among the assorted and sometimes closely related meanings of familiar words. For instance, you are familiar and competent with the word “know,” but you’ve probably never had occasion to reflect on how knowing your best friend doesn’t really get at the same idea as knowing that 2+2=4. This makes it all the more important to watch for definitional remarks and stay focused on the specific idea we want to examine. All the other ideas you might be interested in are out there and they may well be worth examining in their own right. But one project at a time. Otherwise, we wander aimlessly and lose track of what we originally set out to examine.
We should think some about free will here. Lots of people suppose that they are exercising free will if they get to make a choice, without worrying about the potential for choices to be engineered, at least statistically at the level of populations, perhaps by being nudged at the level of individuals.
The traditional view about free will takes free will to be absolutely uncaused, such that any time you make one choice, you could just as easily have made another. Philosophers have largely abandoned this view of free will, but it remains widespread. Empirically, we know full well that people’s choices and behaviors can be manipulated to varying degrees. Indeed, knowledge of how to predict and manipulate human behavior is the foundation of the attention economy.
Most philosophers who work on free will are now more interested in analyses of free will that don’t conflict with causal influences. One example would be to think of free will in terms of the mind operating freely in response to information it receives. A freely operating mind might be sensitive to reasons that bear on some issue, or a mind might be stuck in some way that prevents it from responding to things in effective or illuminating ways. I once heard this described as the weathervane theory of free will. A freely operating weathervane will swivel to point north when that’s the direction the wind is coming from. Likewise, a freely operating mind will be responsive to good reasons for thinking or doing something. A mind that is stuck might double down on the belief that Q, even when we have compelling evidence and reason to think that Q is false. I’ll let you think of other examples.
Now, we can offer a further diagnosis of the problem with surveillance capitalism. Undermining the free and unfettered operation of the mind in deciding what to think and do is a foundational operating principle for the information environment we’ve built.