I agree, the industry is completely overleveraged and there simply isn’t enough utility or income to justify the billions invested, nor the sky-high valuations of these AI companies. That will have to end sooner or later, and then I suppose the actually profitable AI companies will surface, but I don’t believe the technology is going away.

My personal opinion regarding AI and LLMs is that they do not offer enough value to justify these large tech companies spending billions on the technology and putting up data centers left and right without oversight. But because big tech has gambled so big on this, they’re force-feeding AI to everyone. Now I cannot control what my employer wants us to use, nor can I control whether Apple chooses to force its AI onto my phone, but I did jettison Windows for Linux and I haven’t been happier.
I’d want someone to check the AI’s assertions. Having used AI-assisted searches for a personal project, I’ve found the results decidedly mixed. It blithely makes categorically wrong assertions, and never has it responded with a “to the best of our knowledge” qualifier.

I agree with Bill Bupert that one of AI’s first jobs should be to check the content and references of all the “peer reviewed” papers that lend supposed credibility to the faculty, administration, staff, and current direction of thought in America’s tertiary education system, in both liberal and STEM studies.
That would be some interesting “fact checking” and a good indicator of why we’re where we are, and even more concerning, where we’re going.
Absolutely, and if folks think that what we, the general public, can touch is a true indicator of where the technology actually is, then I’d posit they’re sorely mistaken. What’s currently happening is field testing on the populace. There’s a long way to go.
Up until 5 years ago I was obsessed with technology: getting into arguments about what’s better, Apple vs. PC, Android vs. iPhone; religiously following update rumors and changelogs; switching phones, laptops, and cameras every year. Then I needed a laptop for video editing, so I got the then-new 14" Apple silicon MacBook and an iPhone, although I had been the biggest Apple hater. I bought myself a motorcycle, got into camping, and stopped caring that much about technology. I still use it, I still know my stuff, but no obsessions; I use it as a tool, and if it still works well enough, I am happy.
Almost all scholarship cites its sources, so there is no reason why someone who questions the thesis of a paper can’t do their own fact checking. AI is a crutch, and a bad one at that.
I may be blue collar, but I fully understand how “scholarship,” i.e. journal articles and peer-reviewed studies, is formatted.
Oh, I agree, plagiarism and uncited sources exist in research and scholarship; hell, I found some examples myself when I was working on my master’s degrees in History and LIS. But my original point still stands.
There are many examples of plagiarism and citations to nowhere among top-level administrators and “leaders” in their fields, but the full extent of the problem would take millions of man-hours to unravel. This is exactly the kind of job AI could (and will) be doing, and dropping such cumulative results would make more of an impression on the general public than the easily batted-away, one-at-a-time revelations that currently disappear inside the circles of academia when brought forward.
The gatekeeping done to “protect their own” for the good of all is a detriment to the principles most held when they entered the field.
Absolutely. Here in the US we are so completely divided that I have a hard time believing any organization would remain unbiased in that project.
AI can be a useful tool in some situations with human oversight. It’s not a miracle tool that solves all of our problems, and everyone should be skeptical of its use for at least the next decade.
"Open the pod door, HAL."This is worrisome. What do you when you can’t shut it down?

