I'd just like to add that what George is talking about sounds like science fiction, but it began around the late 2000s and has already occurred. There are concrete, factual, on-the-record examples of all of the technologies above being deployed.
The online ads thought that I was Indian (I'm not) and wanted a bride (I don't) for quite some while. They often erroneously think I'm Muslim, that I travel abroad regularly, that I drive a car, that I'm female, that I neeeed to know one thing...

I have gotten several. My favorite was when an ad thought I was a business-owning black man. Only a third right on that.
Most satisfactory.
YouTube keeps serving me ads in Spanish. I do not speak Spanish.

And while you might see this as a "failure" of AI, these data, and your response (or lack thereof), are reported back and used to refine the data model. Think of them not as failures, but as questionnaires you answer by purchasing or not purchasing. The fact that you see ads in Spanish may have nothing to do with what languages you speak. You may simply have watched a certain soccer game, or listened to a Julio Iglesias song from the 70s, for which a significant cross-correlation exists in the ever-evolving data model being applied to you in that instant. Outliers, exceptions and mistakes are features, not bugs, and they're used to enhance the model moving forward.
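To make that feedback loop concrete, here's a toy sketch in Python. None of this is any ad network's actual code; the segment names, correlation weights and update rules are all invented, just to show how one cross-correlated signal can land you in a bucket, and how ignoring the ad only nudges the weight down instead of clearing it.

# Toy sketch, not any ad network's real model: a profile is just weights over
# interest segments, and every response (or non-response) to an ad feeds back in.
profile = {"es_language": 0.0, "soccer": 0.0, "latin_music": 0.0}

# Invented cross-correlations: one soccer match or one Julio Iglesias track
# also nudges the "probably speaks Spanish" weight.
correlations = {
    "watched_soccer_match": {"soccer": 0.6, "es_language": 0.3},
    "played_julio_iglesias": {"latin_music": 0.5, "es_language": 0.2},
}

def observe(event):
    # Each observed event bumps the correlated segment weights.
    for segment, weight in correlations.get(event, {}).items():
        profile[segment] += weight

def ad_feedback(segment, clicked):
    # The "questionnaire you answer by purchasing or not purchasing":
    # skipping the Spanish ad lowers the weight a little; it doesn't zero it.
    profile[segment] += 0.2 if clicked else -0.05

observe("watched_soccer_match")
observe("played_julio_iglesias")
print(profile)   # es_language is now about 0.5, so the Spanish ads start
ad_feedback("es_language", clicked=False)
print(profile)   # es_language drops slightly -- corrected slowly, never discarded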
Amazon regularly makes recommendations that make no sense based on my interests or my purchase history.
And that would be great if the system was correcting itself, but it's not. I've been getting Spanish ads for months. This is a minor thing, but it's the perfect example of AI hallucinating. The system is wrong but it's confidently wrong, which is the worst way for a system like this to act.
I agree, but I mostly worry about what happens when they get it right but use it for nefarious purposes.
I don't want to get banned by putting up a political post so let me just say this: Assume that systems like this will be used for nefarious purposes.
So all you tech geek entrepreneur people, get cracking. There's only one thing in the way. Who can guess what that is?

Significant financial resources? Significant raw materials? Significant power generation? Significant infrastructure advancement? Significant demand for the ROI?
Again, agree.
www.nationthailand.com
I'm far more concerned about the implications to data provenance than I am about Skynet. If you think the Mandela effect is crazy now, just wait. Undetectable information manipulation is wild.

Concerned as well, but again, already happening. For quite a while now.
Google searches give a glib AI summary of all manner of things but... no references. They've happily sucked up fiction into their datasets and I've seen evidence of that; given the fictional narratives government and media pump out, the AI will exponentially veer off course from reality. Junk, at the moment.

I've seen those, but in this case, if you click on the text, they at least provide links to the right, which you absolutely ought to follow to attempt to locate a primary source.
I think the primary issue keeping AI from becoming a ubiquitous tool for good (hopefully) is trust. Tech firms have a well-earned trust deficit. But when they can create trust, presumably trust that data used by AI to help people will remain private, there's an opportunity for many people to benefit from the technology.
It's all really a house of cards right now.
About a million years ago, in the late seventies, we used to sit around the lobby of the dormitory and some would theorize: what if the new bar code scanners in the grocery stores were set to overcharge half of one percent?

Were you in a dorm with David and Leslie Newman? That is the basis for the plot of Superman III: Richard Pryor skimming the half pennies from banking transactions.
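For anyone curious how quickly half of one percent adds up, here's a quick back-of-the-envelope in Python; the scan volume and average price are invented purely for illustration.

# The dorm-lobby scheme, roughly: skim half of one percent off every grocery scan.
# All volumes and prices below are assumed, not real figures.
items_per_day = 50_000     # assumed scans per day across a chain of stores
avg_price = 3.25           # assumed average item price, in dollars
skim_rate = 0.005          # half of one percent

daily_skim = items_per_day * avg_price * skim_rate
print(f"per item: ${avg_price * skim_rate:.4f}")   # about 1.6 cents -- too small to notice
print(f"per day:  ${daily_skim:,.2f}")             # $812.50
print(f"per year: ${daily_skim * 365:,.2f}")       # $296,562.50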
