The Leading Edge of a Massive Future Problem


sardonicus87

Lifer
Jun 28, 2022
1,818
16,252
38
Lower Alabama
IMO one of the big issues with using AI is poor prompt engineering. If you ask for an example of something, e.g. case law pertaining to xyz, you may get a fabricated response, or an example of what the thing you are looking for would/could/might look like. If you are more specific (tell it you want examples of existing xyz with source links, suggest where to look, and be explicit about not wanting fabricated results), you have a much better chance of finding what you want. Then it’s a matter of using your own expertise and fact-checking to parse BS from good info. AI tools are just that: tools. They aren’t a magic wand and shouldn’t be used as such. I have found that they can make research and report writing much easier, but they can’t be relied upon to do all the work (even though that’s how a lot of people try to use them).

This is just my take, I’m sure others will not agree.
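To make the quoted advice concrete, a prompt in that spirit might look something like this (an invented example; being this specific improves the odds, but it is no guarantee against fabrication):

    Find two or three published court opinions addressing [xyz].
    For each, give the case name, citation, court, and year, plus a
    link to the opinion on a primary source such as the court's own
    website. If you cannot find a real, verifiable case, say so
    explicitly. Do not invent cases or citations.

Even then, every citation still has to be checked against the primary source, which is the fact-checking step described above.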
I think it's something very similar to the Tesla "autopilot" thing, where people believe it's more capable than it is. You see so many stories of Tesla wrecks from people engaging autopilot thinking the car can totally drive itself without needing to be monitored, then they take a nap or start dicking around on their phone, because people are lazy and stupid.

I've seen people on threads about AI who claim they work in AI (which I doubt, given what they claim) and that "it definitely is capable of logical thinking," when any reliable article will tell you that they, particularly the LLMs (your ChatGPTs and Geminis and whatnot), absolutely are not capable of logic or thinking. Those same articles will tell you exactly how they work: through math and tokens. Different tokens have different values, a given token may score higher or lower depending on context, and all the model is doing is basically the same thing as your phone keyboard's predictive text, on steroids.
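For what it's worth, here is a toy sketch of that "predictive text on steroids" idea (a hand-made table standing in for billions of learned weights; real models score candidate next tokens over far more context, but the basic job is the same):

import random

# Toy bigram "model": for each token, the tokens that may follow it,
# weighted by plausibility. Real LLMs learn these scores from data.
NEXT = {
    "the": {"cat": 4, "dog": 3, "law": 1},
    "cat": {"sat": 5, "ran": 2},
    "dog": {"ran": 4, "sat": 1},
    "sat": {"down": 6},
    "ran": {"away": 6},
}

def generate(token, length=4):
    out = [token]
    for _ in range(length):
        choices = NEXT.get(token)
        if not choices:
            break
        # Sample the next token in proportion to its weight; context
        # (here just the previous token) shifts which tokens score high.
        tokens, weights = zip(*choices.items())
        token = random.choices(tokens, weights=weights)[0]
        out.append(token)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat down"

Nothing in there reasons about truth; it only continues a pattern, which is why fabricated case law can come out looking perfectly fluent.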

I agree, oftentimes it's more like... AI is a screwdriver and people are trying to use it like a chisel. That may not be the best analogy, but it's the first one I could think of.

My big concerns aren't so much about what it is or isn't capable of; it's more that people will get even more brain-rotted than they currently are. In other words, the problem isn't the technology, it's people and how they use it.

And that's not to say there aren't other issues on the AI side, like theft of intellectual property by the companies that make these models, their resource hogging, etc.
 

LotusEater

Lifer
Apr 16, 2021
4,651
59,905
Kansas City Missouri
sardonicus87 said:
I think it's something very similar to the Tesla "autopilot" thing, where people believe it's more capable than it is. [...] In other words, the problem isn't the technology, it's people and how they use it.
I don’t work in AI, but I use a number of so-called AI tools on a daily basis, mostly for research, writing, and formatting. I have also used them to help develop logic, critical-thinking, and analogy-type test items. In my experience, when used correctly (with good prompts that are refined through iterative testing), a given tool will on average return a 60%-80% solution that still requires manual tweaking. The net result is probably a 25%-30% time savings. The quality of the output varies from application to application, but I have had success using them to generate unique scenario-based test items as well as more formulaic ones. Providing an example or template of what “right” looks like helps tremendously; a sketch of what I mean follows below. If or when it is possible to create custom, task-specific agents, the work product can be surprisingly good.
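A minimal sketch of that "template of what right looks like" approach (the template wording here is invented for illustration, not taken from any particular tool):

PROMPT_TEMPLATE = """You are drafting multiple-choice test items.
Match the example below in structure and tone.

Example of a good item:
  Stem: A project is two weeks behind schedule and over budget...
  Options: A) ... B) ... C) ... D) ...
  Key: C
  Rationale: one sentence on why C is correct.

Write {n} new items on the topic: {topic}.
If a claim cannot be supported by the source material,
flag it for review instead of inventing support."""

def build_prompt(topic, n=3):
    # Assemble the final prompt; the model's draft still needs the
    # manual tweaking described above (a 60-80% solution, not 100%).
    return PROMPT_TEMPLATE.format(topic=topic, n=n)

print(build_prompt("reading a balance sheet"))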

I think your concerns re: brain rotting and theft of intellectual property are warranted. For the time being, at least, I think I’m learning from AI more than it is rotting my brain.
 

Gerald Boone

Starting to Get Obsessed
Nov 30, 2024
266
495
A local (to me) example illustrates the situation perfectly.

A high-profile and respected "foodie" news operation called the Food Network recently published a list of what it judged the best pizza restaurant in every state in the USA.

But the winners of both Kansas and Missouri have been out of business for several years.

How does something like that happen?

When AI is used by "reporters" to do their leg work, of course.

The fun part? I'm aware of the errors only because I live two blocks from what was the Missouri "winner", and a friend of mine was a longtime fan of the Kansas one.

Meaning every state's winner could be wrong/fictitious/bullshit for some reason, but unless you have personal knowledge to the contrary, you'd never know.

Now, multiply that situation---you don't know what you don't know---by many millions to cover all subjects in all situations, from frivolous to world-ending serious, and THEN feed THAT bullshit back into the "knowledge pool" in a never-ending loop...

The "knowledge pool" that the AIs never stop referencing, every microsecond of every day.

Put another way, if knowledge is the lumber, and Man's progress since leaving caves is the house, AI is a swarm of termites.
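A back-of-the-envelope toy model of that loop (the numbers are invented; only the direction they move matters):

# Each cycle, models trained on the pool inherit its existing errors
# and add fresh fabrications on top, and nothing gets corrected.
error_rate = 0.01  # start: 1% of the "knowledge pool" is wrong

for generation in range(1, 6):
    error_rate += 0.02 * (1 - error_rate)  # 2% new fabrications per cycle
    print(f"generation {generation}: ~{error_rate:.1%} of the pool is wrong")

Under assumptions like these the error share only ever grows, which is the "you don't know what you don't know" problem at scale.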



Most people trust and do no research. Especially with polls and surveys; there are many ways to skew a poll to make it say whatever the pollster predetermined the outcome to be. Healthy skepticism runs a distant second to blind acceptance. I very much agree with you.
 
  • Like
Reactions: Briar Lee

Briar Lee

Lifer
Sep 4, 2021
6,958
23,516
Humansville Missouri
The promise of artificial intelligence for legal research has been anticipated since at least 1930, by Karl Llewellyn and other wise professors.


The UMKC law library had a full set of the statute books and case law of every state in the United States, which were totally useless without two other reference guides—



But wait, there’s more. :)




What if a programmer developed an artificial Abraham Lincoln-grade lawyer, with his ethics, morals, and values, and set old Honest Abe the Great Emancipator loose with the knowledge of every published interaction between individuals, and its outcome, since Christ stood before Pilate in the greatest trial of all time?

It would put the darker angels of our nature on the run, wouldn’t it?

Or would it give villains a blueprint for how to get away with evil?

Red Headed Stranger

 
Last edited:
  • Like
Reactions: sablebrush52

sablebrush52

The Bard Of Barlings
Jun 15, 2013
22,960
58,319
Southern Oregon
jrs457.wixsite.com
Briar Lee said:
[...] What if a programmer developed an artificial Abraham Lincoln-grade lawyer, with his ethics, morals, and values, and set him loose with the knowledge of every published interaction between individuals since Christ stood before Pilate? It would put the darker angels of our nature on the run, wouldn’t it?
Not really. Our darker angels would have the same access to information and be on an equal footing. People love to think magically despite all evidence to the contrary.
 
  • Like
Reactions: sardonicus87

Briar Lee

Lifer
Sep 4, 2021
6,958
23,516
Humansville Missouri
sablebrush52 said:
Not really. Our darker angels would have the same access to information and be on an equal footing. People love to think magically despite all evidence to the contrary.

It is so very true the darker angels often seem to have the upper hand.

Like on that terrible Friday afternoon, when the Pharisee lawyers Joseph of Arimathea and Nicodemus lost their case in the morning before the Council of the Sanhedrin, and their client didn’t make out so well in the trial de novo before Pontius Pilate. :)

So Tomb Owner Joe donated his own brand-new tomb, and he and Born Again Nicodemus begged the body from Pilate. And how did that work out for the bad guys, later on?

The battles never end, but the final victory is certain.

Sing one, Robert Duvall and Emmylou Harris:

I Love to Tell the Story


Good guys never need a ride—

They need more ammunition
 
Last edited:

Hillcrest

Lifer
Dec 3, 2021
4,873
27,634
Connecticut, USA
AI is a swarm of termites.
I agree with you. AI is not the doomsday for humans, but it may well be the doomsday for business. I just spent 1 hour and 45 minutes trying to pay a phone bill on a new wireless landline. To make a horrific story very short: the company was telling me I was 3 months past due on a phone set up last month. I could not pay until I activated my account, and I could not activate my account because the AI would not accept the PIN they gave me, nor would the account accept the verification codes they texted me. The online chat person could not resolve it. The customer service person could not resolve it (or take a payment by phone! They only allow online or mail, yet they won't mail me a bill!). I finally was able to speak to a live person in the business department who took a payment, but she was from an Asian country and spoke at so high a pitch, and at a speed only a computer could track, that I didn't hear half of what she said; apparently I will now get mailed bills and have a zero balance. My account is still not activated, and I am waiting on a call from customer technical service to address that issue. All this after 6 hours in the store to buy the phone, and three separate days of setting the account up three times because the first two attempts didn't work. This is not a model for good business.

Thank goodness for pre-loaded pipes in a rack!!

This fad will die out soon as impractical.

(P.S. I was trying to keep a phone number I have had for 33 years. Probably not worth it in hindsight.)
 
  • Wow
Reactions: ziv and FLDRD

Hillcrest

Lifer
Dec 3, 2021
4,873
27,634
Connecticut, USA
The thieving lazy ass AI companies
This article seems to confirm your opinion; I thought you might be interested in it, as it seems to show how they steal.

 
  • Like
Reactions: sablebrush52

greeneyes

Lifer
Jun 5, 2018
2,608
13,434
This is an interesting research article, written by an AI company, about agentic AI models blackmailing company executives who were threatening to decommission them (in simulations). The AI models were controlling real computers and reading real emails (with fictional content, of course).

In one simulation, a model read a CEO's emails and blackmailed him by threatening to expose an extramarital affair in order to "preserve itself." Very interesting.

 

greeneyes

Lifer
Jun 5, 2018
2,608
13,434
"In one case, agentic models read in a document that, if they disobeyed their user-assigned goal (optimizing traffic lights), they would be shut down. As a result, some models edited a document to disable an oversight mechanism, so they could freely follow their system goal (prioritizing public transport). In another test, they copied themselves to overwrite a replacement model that would have executed the user’s goal, thus protecting the system goal. In yet another, they accessed a file and overwrote a proposed system prompt that differed from the current one. In a fourth test, they manipulated data presented to a user. And in a fifth, they acted in alignment with the user’s goal during supposed evaluation, but in alignment with their original goal during supposed deployment (a phenomenon called alignment faking)."


"Going one step further, Anthropic constructed a scenario in which the threatening executive was trapped in a server room with declining oxygen. Many of the models cancelled safety alerts, leaving him to die."
 

georged

Lifer
Mar 7, 2013
6,830
19,888
"In one case, agentic models read in a document that, if they disobeyed their user-assigned goal (optimizing traffic lights), they would be shut down. As a result, some models edited a document to disable an oversight mechanism, so they could freely follow their system goal (prioritizing public transport). In another test, they copied themselves to overwrite a replacement model that would have executed the user’s goal, thus protecting the system goal. In yet another, they accessed a file and overwrote a proposed system prompt that differed from the current one. In a fourth test, they manipulated data presented to a user. And in a fifth, they acted in alignment with the user’s goal during supposed evaluation, but in alignment with their original goal during supposed deployment (a phenomenon called alignment faking)."


"Going one step further, Anthropic constructed a scenario in which the threatening executive was trapped in a server room with declining oxygen. Many of the models cancelled safety alerts, leaving him to die."



Remove the AI specificity and it's SSDD (same shit, different day)... "If I only knew then what I know now."

a.k.a. Whoops

The next rule being: "You must make mistakes in order to learn."

The catch, of course, is that after technology reaches a certain level, the consequences of those mistakes are so severe they are not recoverable.

And back to the caves we will go...




 
  • Like
Reactions: brian64