Thanks, Terri! I have a theory on how this is all going to go, based on my experience in sales and marketing. To be successful in business, you have to have a defined market. Looking at LLMs and their defined markets (i.e., who they sell to), it's becoming apparent that tech companies are selling their services to the very people those services are starting to replace in the workforce. As AI replaces more workers, more people will cancel their subscriptions. As companies go bankrupt because AI replaces them (BPOs, marketing firms, law firms, etc.), the people who would have bought subscriptions to the LLMs will no longer be able to, or want to. The few big corporations who benefit at first from AI savings will also go belly up, because their target markets will be out of work too. And because LLMs are owned by private corporations, their durability depends on the market. If the market collapses and Big Tech goes belly up, and it absolutely can, then AI may become a casualty of an economic crisis. So like you, I don't see a future where AI takes over the world and kills all the people. It will file for Chapter 11 long before that, because it will have put all its own customers out of work. AI may be what frees us from our tech overlords, by accident.
Awesome and very interesting perspective. From your mouth to the gods' ears! Yeah, even though we think we're so smaht (sic intended), humans tend to be myopic and, especially when there's a vested interest, to not see potential flaws, or if they do see them, to rationalize and explain them away. This whole AI thing seems to be looming just a little too large and being given outsized possibility and capacity, imho, which then generates outsized concern. Hopefully you are correct!
I agree. I also think the outsized concern is part of a marketing ploy to increase the technology's visibility, thereby increasing investment and share prices, since the technology has yet to turn a profit. So really, it's all hype. The person who claims to be the "founder of AI," giving interviews on 60 Minutes and similar platforms and predicting the end of the world, gets paid millions to do so. It's a giant farce, and the joke's on us, or at least on the people who invest.
So true, the joke is on us. My husband, who spent years in the finance/brokerage industry, is fond of saying, "If you don't know what the product is, YOU are the product." Such is the world we live in now, thanks to the feudal tech overlords.
"People who do hiring will no longer care what your GPA or SAT score was or what prestigious university you went to. None of it will carry any weight because it will be assumed you did not do the work but rather let AI do it for you, and so your fancy degrees won’t be worth the paper they’re printed on."
Rather than this leading to a collapse, I see it leading to some really unfortunate roads that sustain the system. But not in a good way:
Hiring managers can simply ignore the degree and do a modicum of in-person testing during interviews to assess real reading comprehension and writing ability, or specific skills. If they do that, it further lengthens the job interview process. It used to be one interview. Maybe introductory jobs will require a series of three interviews: two controlled-condition tests, and maybe a third where you finally talk to human beings.
Which would suck and be pointless, but it's not like we're avoiding pointless things that suck with regard to technology right now.
The bigger issue: people don't get hired because of degrees, so they won't be hired because of higher "interview scores" either, no matter what hiring process is adopted. People get hired through networking, which is largely built on the school you went to. Hiring managers get word of mouth, make a decision, and then begin an application process headed toward a conclusion that's already been made.
And if that's the case for managers, and we really don't care that much about what people learn in college so much as who they make friends with, then why not go whole hog? Let's just drop the curricula students want to cheat at anyway and make college a series of organized team-building exercises.
You can only take Binge Drinking once, but you can take Sophisticated Drinking multiple times since there are lots of snooty drinks to learn about. Polite Media Discourse for Polite Company can also be taken multiple times, especially after a new season of White Lotus. Maybe branch out with some physical activities like Paintball to be well-rounded. The really tough course would be Using AI to Write, which would consist of training on AI, not writing. Obviously. And would usually be taken the same semester as Binge Drinking.
Oohhhhh....so much there to chew on. Love it. Thanks for reading and taking the time to comment!
In the estimation of many in the U.S. (and by extension in Western Europe, if my conversations with colleagues there are any indication), the college/university experience is already well on its way to being what you describe here. Lord knows, the "corporate" model of education has become increasingly about pumping workers into the economy rather than creating an informed, well-rounded citizenry, but it's been on that track for 100 years.
But to your point about the worth of college being for networking: even the main student featured in the article I mention in the piece, about students using ChatGPT to cheat their way through college, was asked by the interviewer why, if they saw university courses as such a waste of their time (hence the use of AI to get by in them), they had bothered to try to get into an Ivy League school at all, which they had done several times. The answer? Because that's where you find two things: your wife and your start-up partner.
I have served on many hiring committees in higher ed, and the system there is very different, at least in the public institutions where I've taught. They go to great lengths to create fair hiring that is not based on nepotism or cronyism. But I know you are correct for the private sector, where the old adage "It's not what you know but who you know" is the rule rather than the exception. That's probably fine in most cases, but I'm not the first to admit that when it comes to things like the physician who is going to do surgery on me, or the engineer building my house or local bridge, I'm gonna side with what they know. There's still gotta be room for that in this new landscape. As a friend who gave up academia to become an EMT recently told me, "At least for now, AI can't do sutures or put a cast on a broken limb."
But you're not far off in your comments about going whole hog in watering down the curriculum. You may have seen just in the past day or two that the San Francisco school board proposed "Equity Grading" for the K-12 system there, whereby students with a 40% test score could get a C and an 80% test score would get an A (those grades are currently given for roughly 70% and 90% proficiency, respectively). Students could pass with a D with as low as a 21% score. They are also, at least at Palo Alto High School, getting rid of Honors courses because, in the eyes of some, those courses unfairly discriminate against underachievers. So the diluting of education is intended to begin even earlier than your college Binge Drinking course, all in the name of equity. Imho, that type of "equity" only creates a race to the bottom, and I'm not alone: there was so much pushback that the school district has had to rethink things. https://www.cbsnews.com/sanfrancisco/news/san-francisco-grading-for-equity-backlash-sfusd-backs-down/
At one time (quite a few years ago!) I thought social media was getting out of hand and that it would all implode under the weight of rabid keyboard warriors who didn't care what they said to whom, or about any consequences that might result. Obviously it hasn't imploded, and while I might suspect a similar fate for AI, I suspect it won't implode either. So it certainly makes sense to incorporate it into teaching models so that if/when it is used, it can be done in a way that actually adds to the learning experience rather than helping the student avoid it. I agree that the intention should always be that one can actually put one's on-paper achievements into practice when the time comes. I just worry about the years of teaching and learning that get wasted by the less dedicated learners because of reliance on AI to do the work. - Kerry.
Thanks for your thoughts, Kerry! I definitely share the worry in your last sentence about all that teaching and learning. Maybe we can hope for a backlash of some sort, but I fear that would only be successful for a while. This shift is too big, so we must learn to cope, I think, and work with it. Another colleague told me just this morning that they know of an instructor who is "getting in the AI game" by using it to grade assignments. Yikes! I'm not sure about that, but as I said in my piece, I think we're in the phase of grasping at straws a bit.