I think it's something very similar to the Tesla "autopilot" thing, where people believe it's more capable than it is. You see so many stories of Tesla wrecks from people engaging autopilot thinking the car can totally drive itself without needing to be monitored, then they take a nap or start dicking around on their phone, because people are lazy and stupid.

IMO one of the big issues with using AI is poor prompt engineering. If you ask for an example of something, e.g. case law pertaining to xyz, you may get a fabricated response, or an example of what the thing you are looking for would/could/might look like. If you are more specific (tell it you want examples of existing xyz with source links, suggest where to look, and be explicit about not wanting fabricated results), you have a much better chance of finding what you want. Then it's a matter of using your own expertise and fact-checking to separate BS from good info. AI tools are just that: tools. They aren't a magic wand and shouldn't be used as such. I have found that they can make research and report writing much easier, but they can't be relied upon to do all the work (even though that's how a lot of people try to use them).
This is just my take, I’m sure others will not agree.
I've seen people on AI threads who claim they work in AI (which I doubt, given what they claim) and insist that "it definitely is capable of logical thinking." Any reliable article will tell you that these models, particularly the LLMs (your ChatGPTs and Geminis and whatnot), are absolutely not capable of logic or thinking. They'll tell you exactly how they work: through math and tokens. Different tokens have different values, a given token may score higher or lower depending on context, and all the model is doing is basically the same thing as your phone keyboard's predictive text on steroids.
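The "predictive text on steroids" point can be sketched with a toy example. Everything below is made up for illustration (the candidate tokens and their scores are invented, not from any real model), but the basic shape is next-token prediction: score each candidate token given the context, turn the scores into probabilities, and pick the likeliest continuation.

```python
# Toy illustration of next-token prediction -- NOT a real LLM.
# The scores (logits) below are invented; a real model computes them
# from billions of parameters, but the selection step looks like this.
import math

# Hypothetical scores for candidate next tokens after some context.
logits = {"that": 2.1, "in": 1.3, "banana": -3.0}

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)
best = max(probs, key=probs.get)
print(best)  # the highest-scoring token wins: "that"
```

No reasoning happens anywhere in that loop; it's statistics over which token tends to follow which context, which is exactly why the output can be fluent and still fabricated.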
I agree. Oftentimes it's more like... AI is a screwdriver and people are trying to use it like a chisel. That may not be the best analogy, but it's the first I could think of.
My big concerns aren't so much about what it is or isn't capable of; it's more that people will get even more brain-rotted than they currently are. In other words, the problem isn't the technology, it's people and how they use it.
And that's not to say there aren't other issues on the AI side, like the theft of intellectual property by the companies that make these models, or their resource hogging, etc.