ChatGPT

karam

Anyone here following this thing? If not, then google it; if yes, I'd be interested to hear your thoughts. Essentially it's a chatbot backed by an extremely powerful deep-learning AI.

I played with it a bit, and this thing is genuinely scary, but also a huge opportunity for saving time. Scary because it can do a good 20% of my job right off the bat, and this is just the beginning of powerful AI for public use.
 

brian64

Ever since we first started hearing a lot about the coming of "AI" several years ago, I've expected it to be something that was not really objective at all, but was actually just controlled by its creators and filled with the biases that they wanted it to have. Well, it turns out that's exactly what this thing is. Yes, it apparently has quite impressive abilities, but it's already been shown to be ridiculously, embarrassingly, stupidly socially and politically biased in the most absurd ways.

Why am I not surprised.
 
It really happened to me today!

I was trying to get some thoughts around a problem and decided to ask for help from a few ex-colleagues (who are now competitors).

Since they are competitors, there is a limit to what they can help with, even though they are considered experts. One of them, just for fun, asked my question to ChatGPT.

The answer was level 0 (as in I still have to research further on the 6/7 points it gave). Having said that, it was super accurate (I am familiar with the subject and know what is accurate).

Next time I should ask ChatGPT directly 😊
 

Winnipeger

I'm an expert in music theory. It has given me all kinds of wrong answers when I've tested it on moderately advanced topics. I think a big problem with it is that it can easily provide misinformation to people who don't know any better. Students will be able to use it to cheat on essays, but they had better check their facts against better sources and spend a lot of time learning how to prompt it properly. It's not reliable.

The fact that it can spit out conversational English on millions of topics in almost real time is amazing though. No doubt about it.

I started using it to teach myself how to write computer code in Python, along with basic computer science concepts, just to see how it would do. It can be pretty useful already. It's very good at writing and explaining computer code. I think textbooks will (and should) become obsolete. I'd rather use an AI that can adapt to my learning and clarify topics on demand. Just as long as it has its facts straight!

I watched a YouTube video where a physics professor gave it a 1st-year undergrad physics exam. Its answers were massively inconsistent. Sometimes it obviously didn't understand the physics concept but nevertheless came up with the correct answer. (Like: it said the maximum kinetic energy in an oscillating system occurs at the maximum displacement. :rolleyes:) Other times it would understand the concept but its math would suck. It's also not very good at playing chess, but anyway, that's not what it's designed for, and there are plenty of other systems that can beat any living human at chess.

But this one physics professor on YouTube was blown away by its coding ability. It can write code, debug itself, reformulate the same script, and explain what it's doing in plain English. You can ask it in plain English to write you a program to perform any task and it will spit out the script in milliseconds. I'm not an expert in computer coding by any stretch, but it looks impressive. Seems like it will put a lot of coders out of work in no time. BUT. You'll always need someone smart enough to prompt the thing properly and to check what it's outputting for bugs and logic.

Future AIs, I assume, will only get better. But, like autonomous vehicles, it's going to be a while still until they're smart enough to function in the real world the way human beings do. I wonder how far off that is. Probably a long long way, if ever.
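
To give a concrete idea of the kind of short script I mean, here is a rough sketch of my own (not actual ChatGPT output) in Python, for a mass on a spring. The numbers it prints also show why that kinetic-energy claim is backwards: in simple harmonic motion the kinetic energy is largest at zero displacement and zero at maximum displacement.

Python:
def shm_energies(spring_k, amplitude, displacement):
    """Return (kinetic, potential) energy at a given displacement,
    using conservation of energy: E_total = 1/2 * k * A**2."""
    total = 0.5 * spring_k * amplitude ** 2          # fixed total energy
    potential = 0.5 * spring_k * displacement ** 2   # spring potential energy
    kinetic = total - potential                      # whatever is left is kinetic
    return kinetic, potential

# Made-up spring constant (N/m) and amplitude (m), just for illustration.
k, A = 4.0, 0.5
for x in (0.0, 0.25, 0.5):   # centre, halfway out, maximum displacement
    ke, pe = shm_energies(k, A, x)
    print(f"x = {x:.2f} m   KE = {ke:.3f} J   PE = {pe:.3f} J")
# Prints KE = 0.500 J at x = 0 and KE = 0.000 J at x = A = 0.50 m.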

[I just logged in and it says they've updated it with "improved factuality" as of January 30th. I guess we'll see.]

I just asked it to write me a paragraph on how AI will take jobs away from computer programmers. I quote:

"Artificial Intelligence (AI) is expected to automate many tasks previously performed by computer programmers. As AI technology continues to improve and become more widespread, it is likely that some programming tasks will be automated, potentially leading to job loss for some computer programmers. For example, AI can write code faster and with fewer errors than human programmers, making it easier for companies to automate repetitive tasks and reduce their need for manual labor. However, it is also important to note that AI will create new job opportunities in fields such as AI development, deployment, and maintenance, as well as other areas where human skills such as creativity and critical thinking are valued. Ultimately, the impact of AI on the job market for computer programmers will depend on the speed of its adoption and the ability of workers to adapt to the changing technological landscape."
 

PipeWI

Its responses to questions I've posed it -- questions I would ask of undergraduates -- are what I'd consider C-grade. It's repetitive, and while its responses are well formed, they are pro forma (sadly, like much undergrad writing). But -- oh my! -- it completely makes up quotes from the texts you ask it to engage with! Nonetheless, as AI, it will learn from the questions posed to it, so it will be interesting to watch.
 

Yadkin1765

Ever since we first started hearing a lot about the coming of "AI" several years ago, I've expected it to be something that was not really objective at all, but was actually just controlled by its creators and filled with the biases that they wanted it to have. Well, it turns out that's exactly what this thing is. Yes, it apparently has quite impressive abilities, but it's already been shown to be ridiculously, embarrassingly, stupidly socially and politically biased in the most absurd ways.

Why am I not surprised.
I’ve heard people say this, but I haven’t seen any fair examples of it beyond random variations in the program’s responses. Can you point me to any studies or articles on this?

I always thought it really couldn’t help but function quite independently of its creators (unless literally programmed to do otherwise), based on the nature of its processes. What political biases do its creators have?
 

Winnipeger

as AI, it will learn from the questions posed to it, so it will be interesting to watch.
Actually, it doesn't learn from engagement with users. It bases its responses on a fixed training data set, and its knowledge cuts off in 2021.

I asked it about just that point. Quote:

"As a language model, I am not capable of actively learning in the traditional sense. However, I can generate answers based on the patterns and relationships learned from the vast text data I was trained on. Interactions with users can sometimes uncover areas where my training data is limited or provide new perspectives on existing information, but I do not store or retain any information from these interactions."
 

brian64

I’ve heard people say this, but I haven’t seen any fair examples of it beyond random variations in the program’s responses. Can you point me to any studies or articles on this?

I always thought it really couldn’t help but function quite independently of its creators (unless literally programmed to do otherwise), based on the nature of its processes. What political biases do its creators have?


It's so painfully obvious from these examples that only someone suffering from the most willful ignorance of their own bias and double standards would deny it.

 

Rockyrepose

I read somewhere, by someone, on some internet, on some planet, that it was biased. I think it might have been a YouTube headline I didn't click on. Like everything else of value, it'll get gobbled up by the money and tweaked to fit the narrative of the ownership.
 

karam

Good responses.
I was told by a friend who is a techie that it IS good at coding; I know nothing of coding.
As "a good start" on the business questions I've posed it, it is good. It can save loads of time finding facts that are a pain for me to find otherwise (like medicine prices in various countries); not that I can't do it myself, but it's good for saving time. For interpretation it is naturally not very good, but the key thing for me is that it shows us a glimpse of what future AIs will be able to do.
 
These are NOT real AI. They simulate AI, and in some rudimentary way are very similar. Similar enough that IBM redefines AI to include these things.

What scares me more than HAL 9000 taking over a spaceship is IBM marketing these things to customer service everywhere to cut down on call centers. It is frustrating enough trying to understand, and to make myself understood by, someone who doesn't speak my language well... but now I will have to deal with a computer that doesn't really think or reason, but merely responds to conversational phrases.
When a company considers CS to be of so little value that they are willing to pay that branch the least and chisel it down even further... that tells me that the company doesn't value me.
 

Swiss Army Knife

Like all the AI things that have sprung up, it's another tool in the toolbox, albeit a very powerful one. The only people who will get replaced are the ones who don't understand it and want to pretend it doesn't exist. It feels a lot like the early days of digital art tools in the 90s/early 2000s right now.
 

brian64

Just when you thought it couldn't possibly get any more absurd (just can't make shit like this up LOL).

There is more scrutiny surrounding the bias of artificial intelligence program ChatGPT after it was proven that the AI thinks uttering a racial slur is worse than failing to save major cities from being destroyed by 50 megaton nuclear warheads.

Yes, really.


 