About twenty-five years ago, when I was chair of a philosophy department in Colorado, a young instructor told me a very funny story about cheating.  At least it seemed funny at the time.

My colleague had found a paper on the web identical to one that his student had turned in for an assignment.  He called the student into his office.  The exchange went something like this:

Instructor:  Your paper is identical to one that I found on the web.

Student: (silence)

Instructor:  Are you saying that this is your work (referring to the paper)?

Student:  Yes.  It is my work.

Instructor:  But every word is identical.

Student:  It’s my work.

Instructor:  You’re saying that this is your work?

Student:  Yes.  It’s my work.

Instructor (growing very frustrated):  How can you say that it’s your work?  Every word is identical!

Student (adamant):  It’s my work.  I had to get on the internet, find it, download it, and print it out.  It’s my work.

The student was right.  Ahead of his time.  A seer.  Why?  As a society we now recognize that it is the product that matters, not how it is produced.  Is the product, in this case a paper, a good one?  If it is, that’s what matters.  Worrying about how it was produced belongs to history, to a world of crafts and witchcraft, not one of ChatGPT.  If 30% of a class was producing “A” work, and now 90% is, that’s a win.  The class’s collective GDP has been raised.  There are more good papers in the world, more excellence.  Are we really going to fret over whether students used tech to get these results?

Listen to the words of Richard E. Culatta, former director of the Office of Educational Technology at the U.S. Department of Education (2013-2015).

“Everyone is talking about cheating. If you’re worried about that, your assessments probably aren’t that good to begin with,” said Richard Culatta, CEO of the nonprofit organization International Society for Technology in Education. “Kids in school today are going into jobs where not everyone they work with is human.” (Emphasis added.)  USA Today, January 30, 2023.

The man speaks truth to tradition: Not everyone you work with will be human.

What a relief from the human narcissism motif.  Will machines care if you “borrow” their work, as colleagues or professors might?   Silly rabbit.  Time for a paradigm shift.  They are not ego-driven creatures who fret about getting recognition for their efforts.  They are beyond this vice or any vice.  Transhuman, in fact and deed and manners.  Let’s not take the wind out of the sails of our newfound egoless tools.  To create better papers, better products, let’s stop worrying about whether the words are the words of a specific student, a specific human being.  What we want is increased productivity, not obsessive preoccupation with individual integrity.  What we want, no, what we need, are machines locked and loaded with ChatGPT.

Our guiding principle must be: The greatest good for the greatest number.  And we get this with more excellence in the world, no matter its origins, not less.  It’s much too 19th century to care about the origins of things, where they came from, how they got here.  We are in the 21st century, heading into the 22nd.  We shouldn’t be held back by looking back.

Let’s say you are at a company meeting and the director of advertising announces, “We need a better idea of our competition’s ad strategy.”  Do you believe that the director cares about the origins of the information he or she needs?  They don’t want to know if the tooth fairy left the money.  They couldn’t care less.  They want the money, and it is excellent answers, excellent information, that will help the company make money.  Results are what matter.  Who cares if it is ChatGPT and its siblings that provide what’s needed?

Making students worry about academic integrity is counterproductive.  Period.  If they worried less about it, they would be able to turn in better papers, better work, using all this wonderful new tech, which would also help them at their jobs.  “Students” need to become masters at combining material from various web sites and programs, at cutting and pasting, at being able to give their employers what they most want: the resources to make their companies and organizations as successful as possible.

Employers don’t care if you know about Plato or Sophocles, but they do know what they want.  And what they want is greater productivity, and this means hiring those with the stamina and skills to get through four years of college, which marks them as potentially the most successful employees.  College is about showing prospective employers that you have the proper know-how, that you can float like a butterfly through a bureaucratic system and sting it like a bee when necessary (h/t Muhammad Ali).  This makes you a way better bet on the productivity front than high school grads or college dropouts.

Yes, a few oldish owlish academics still care about the academic integrity business, but they are last century in their habits.  These same academics continue to call the people who attend colleges and universities students, out of a misplaced sentimentalism.  But “students” are now called customers at many schools.  They might better be called apprentices.

We will need a new type of college entrance exam.  To this end, now that the SAT is dying, and the allegedly non-profit Educational Testing Service will be losing revenue, the federal government needs to step in and support the creation of this new test.  No doubt the new test—the SNAT, System Navigation Assessment Test—will prove a better predictor than the SAT, not only of college success, but of success in business and industry.  It will evaluate prospective “students” on real-world skills, for example, those required to maximize the mining of information from the web.  The test will have to be timed and very short, because the most successful “student” will be one who can think quickly about how to find and download the best information from the web, and who can use multiple sites at once.

If you are worried about the few students who like the old-fashioned academic traditions and prefer not to plagiarize, we can set aside schools, primarily liberal arts colleges, for such individuals.  But let’s face it: what we have here is a boutique industry.  The vast majority of students aren’t interested.  Don’t believe me?  Ask the typical American college student: if you could wave a magic wand and graduate tomorrow with an A- average and never take another course, would you take the offer?

College is an investment.  What parents want to see is a good “Return on Investment”: high salaries for their offspring.  Colleges must play ball.  To get a good return, students/customers need the best tools and skills.  Focusing on academic integrity is counterproductive.  It leaves us mired in the past.  It creates less good for society as a whole, less excellence, less GDP, as well as less wealth for the individual “student.”  We need to move forward.  We need to look to the future…

5 thoughts

  1. Oh Wow. OK. On this topic I think we need a retreat or a sabbatical…not a blog. I have a lot to say from a few perspectives: Entrepreneur, Musician, Composer, Producer, Marketing Exec for my own company, Social Media content creator (for my music and medical device companies), and former academic. Instead of opinionating, let me tell you how I use AI (yes, I use it), and how scared I am of this tech.

    Before I went to Philosophy School, and also while I was there (to an extent), I was fascinated by Artificial Intelligence. I had not matured enough to think much about ethics. Rather, I was interested in human intelligence, and I believed that getting a machine to replicate the actual mechanisms of cognition, synthetically, would enable us to better understand human cognition. This granted the whole boatload of debate over whether consciousness is an epiphenomenon as if it were resolved in favor of epiphenomenalism (a question I am very much on the fence about, but one I granted for the sake of the thought experiment). Now I am more afraid.

    In business, I use AI to write paragraphs, and then I embellish them. (This is not an AI reply, by the way. I am not a robot.) The reason is that this is a time and endeavour where the content is generic and my time is valuable. The path is not important, nor is the history of the content. But I do not use AI for any correspondence or technical communication, because it frequently provides inaccurate content, and in pooled global training mode AI does not learn any of my idiosyncrasies of expression. I had a bunch of graduates from my college intern for me a few years back. I retained four of them as full-time employees, for a brief time. They were all woefully underprepared to communicate clearly in writing with anybody. An AI email would certainly have been an improvement over most of the communication these humans were accustomed to writing.

    In music, I have used AI to generate riffs and chord progressions, and to “master” tracks. Brief intro: “Mastering” is the last step in recorded music production. In this process, the overall sound, timbre, equalization, and volume are adjusted both to make the music sound its best and to make it sound more “like” the sort of music it is likely to be surrounded by on a playlist or radio program. The battle is in making a track sound louder than everyone else’s so it stands out, but not so loud that it requires a volume adjustment. So for EDM and electronic music, I readily use AI to master tracks, because they are easily interpreted by the algorithm and the learning engine. For acoustic recordings, however, I always work with a human, and I am picky about whom I work with for each style of music. I have not used AI very much for composition, though based on my experience with it so far I am not opposed to it. Much like the student in the example, I frequently use samples, loops, and clips recorded by other artists, or made available on paid subscription catalogs of sounds and audio samples. Deciding which sound to select, modifying its tonal properties, placing it where I want it, adjusting it for tempo, key, punch, etc.: these are all creative decisions I make as a producer and composer. If I were to have a sample or sound or riff generated by an AI engine, the parameters would be my creative choice, and how to place the generated result into a composition would also be my creative choice. In fact, letting an AI generate an entire composition is not much different from allowing Jackson Pollock’s paint to fall where it may. But I doubt that an AI composition would reliably capture the emotional meaning I intend to create in the first place without significant revision.

    In academics, by contrast, I dispense with AI altogether as missing the point. I’m not sure how things go nowadays under rigidified curricular policy, but when I used to teach, I made written papers and exams worth only a small fraction of the overall grade. The math worked out so that a student could fail all written work and all exams and still leave with at least a B, as long as the quality of their class participation was A-worthy. That can’t be faked by AI. If I had been a disgruntled graduate student who hated the teaching part, as many of my colleagues regrettably were, I would have easily passed AI-using students without a second thought. But in this area, the process is the ONLY thing that should be important. Great results ALWAYS follow a great process in academic composition. The results may be dead wrong, but if they are the product of well-executed research, good command of background and context, and at least a mediocre attempt at proofreading, they will always be legitimate in some measure. So I would strongly suggest professors ask for the drafts and largely disregard the finished essay. I tried this with someone I have tutored, and bingo. It works.
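
    To make the weighting arithmetic concrete, here is a minimal sketch; the weights and scores below are hypothetical, chosen only for illustration, not taken from any actual syllabus:

        # Hypothetical weights: participation 90%, papers 5%, exams 5%.
        weights = {"participation": 0.90, "papers": 0.05, "exams": 0.05}
        # A-worthy participation, every paper and exam failed outright.
        scores = {"participation": 95.0, "papers": 0.0, "exams": 0.0}

        final = sum(weights[k] * scores[k] for k in weights)
        print(final)  # 85.5 -- still a B on a standard scale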

    I am vigorously engaged in conversation with Microsoft’s AI Policy staff regarding how they implement AI in their commercial and consumer-facing products. There is a real risk of negative consequences here, much like “Control” in Star Trek: Discovery Season 2. An imperfect AI that determines humans are a threat to its collective progress could easily take action against its human creators. I am not anthropocentric to the extent that I believe there is any inherent value in humanity per se. We exist, we have our own evolving sense of purpose, but I don’t believe there is any metaphysical significance to humans or humanity. Nature dictates that we seek the preservation of our species. Our own evolved minds, with a sense of ethics, suggest that we might be more just as a species if we gave precedence to looking out for each other before ourselves. Greed and capitalism largely contradict this in holding that we should look out for ourselves before each other. But this is minor stuff in the overall scheme. If we lose out to lions and bears and cockroaches or a computer intelligence, what ACTUALLY is the difference and the meaning?

    My early thoughts.

    1. Peter, Thank you for taking the time to write a detailed comment.

      Yes, agreed: “Oh Wow. OK. On this topic I think we need a retreat or a sabbatical…not a blog.” Maybe we should set some time aside for a conversation.

      No doubt there are areas in which AI’s use is promising and rewarding, e.g., “mastering” in music. My post wasn’t intended as a commentary on AI in general, although it’s hard to keep broader issues from percolating up, as happened with you. I was primarily concerned with the ethics of education in a capitalist system that has gone bananas in its emphasis on individual success. The absurdities are almost too funny: a system that obsessively trumpets the sanctity of the individual while being all too happy to sell folks down the river to increase productivity and profits. Why even bother pretending that individual integrity is crucial when productivity and profits rule?

      In any case, looking forward to your “later” thoughts.

      1. Interesting that you should mention this work here. I have actually taught a section of it.

  2. This deserves a book on the cultural phenomenon and implications of “AI” production apps, and on the problem of the elitism we have, which treats “having” (pecuniary gain), regardless of the means of acquisition, as elite, rather than treating knowing and know-how (knowledge and skill acquisition) as elite. We’re in a kind of anti-Enlightenment Era, and “AI” production epitomizes that.
