Grumble Thread
Originally posted by Dave2002:
I think that's a conveniently journalistic term - not really a technical term at all, though easily picked up by some who don't really understand it at all.

It may be passing into common parlance and therefore prone to the misunderstanding and misuse that inevitably follows, but that is not where or how it originated. I found several scientific references, but this is a summary:
https://en.wikipedia.org/wiki/Halluc...l_intelligence)
As one source said, there is an argument for 'confabulation' being a better term, but 'hallucination' is probably too well bedded in now.
Originally posted by oddoneout:
It may be passing into common parlance and therefore prone to the misunderstanding and misuse that inevitably follows, but that is not where or how it originated. I found several scientific references, but this is a summary:
https://en.wikipedia.org/wiki/Halluc...l_intelligence)
As one source said, there is an argument for 'confabulation' being a better term, but 'hallucination' is probably too well bedded in now.

That article is interesting - and does seemingly show that many people are unhappy about the particular use of the term - and that includes me. My feeling is that at the current time AI systems do not feel, think, or imagine things in the same way that human beings do. Humans who hallucinate presumably do think that what they are [perhaps] imagining is in fact "true".
OTOH I don't believe that AI systems currently have that degree of misplaced certainty. I don't particularly like the word "confabulation" either, but at least it represents an attempt to describe a non-human approach to misinformation.
Apart from just plain old errors, another reason why AI systems may demonstrate this kind of output is that they may have been trained on a biased training set - either accidentally or, in some cases, deliberately. Then, given the "evidence", it is perfectly plausible that they are not "hallucinating" but simply making what seem to be the best judgements given biased and unreliable inputs.
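A minimal sketch of that last point, assuming nothing more than a toy counting model - the groups, labels and numbers below are invented purely for illustration, not taken from any real system. A model trained on a skewed sample simply reproduces the skew in its "best judgement"; no hallucination is involved.

Code:
from collections import Counter

# Toy, invented example: learn P(label | group) by simple counting.
def train(examples):
    counts = {}
    for group, label in examples:
        counts.setdefault(group, Counter())[label] += 1
    return counts

def predict(model, group):
    # Return the most frequent label for a group and its relative frequency.
    dist = model[group]
    label, n = dist.most_common(1)[0]
    return label, n / sum(dist.values())

# A biased training set: group "B" is mostly recorded as "flagged",
# purely because of how the data happened to be collected.
biased_data = ([("A", "clear")] * 95 + [("A", "flagged")] * 5
               + [("B", "clear")] * 40 + [("B", "flagged")] * 60)

model = train(biased_data)
print(predict(model, "A"))  # ('clear', 0.95)
print(predict(model, "B"))  # ('flagged', 0.6) - the input bias, faithfully reproduced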
Originally posted by Dave2002:
That article is interesting - and does seemingly show that many people are unhappy about the particular use of the term - and that includes me. My feeling is that at the current time AI systems do not feel, think, or imagine things in the same way that human beings do. Humans who hallucinate presumably do think that what they are [perhaps] imagining is in fact "true".
OTOH I don't believe that AI systems currently have that degree of misplaced certainty. I don't particularly like the word "confabulation" either, but at least it represents an attempt to describe a non-human approach to misinformation.
Apart from just plain old errors, another reason why AI systems may demonstrate this kind of output is that they may have been trained on a biased training set - either accidentally or, in some cases, deliberately. Then, given the "evidence", it is perfectly plausible that they are not "hallucinating" but simply making what seem to be the best judgements given biased and unreliable inputs.

Indeed, and it comes back yet again to 'garbage in, garbage out'?
Originally posted by Dave2002:
There do seem to have been some valid concerns, though why there have now been resignations at this stage does make one wonder.

Probably part of a cunning plan to boost the ratings for BBC News. It would appear that GB News is now more popular than both the BBC News Channel and Sky News. It may be hard to believe now, but back in the 1970s ITV's News at Ten regularly featured at least once in the Top 10 TV programmes every week.
Originally posted by oddoneout:
Indeed, and it comes back yet again to 'garbage in, garbage out'?

Presumably in response to the training set issue. What is put in is not necessarily garbage, but biased, so the responses of the trained AI systems will reflect that bias.
This has been noted, for example, regarding face recognition, and yet again raises questions about policing, which may be biased towards - or against - certain racial groups.
Originally posted by Dave2002:
Presumably in response to the training set issue. What is put in is not necessarily garbage, but biased, so the responses of the trained AI systems will reflect that bias.
This has been noted, for example, regarding face recognition, and yet again raises questions about policing, which may be biased towards - or against - certain racial groups.

Bias is receiving a lot of attention, understandably, and is certainly important, not least for its unintended consequences, e.g. in medical matters. There is also, though, a great deal of just plain garbage - factually incorrect material distributed incontinently across multiple platforms - which risks being sucked into the system, in addition to the dubious status of more and more of the scientific research being done now.
Originally posted by Dave2002:
My feeling is that at the current time AI systems do not feel, think, or imagine things in the same way that human beings do.

It's not that they don't think, feel or imagine in the same way as us; it's that they don't do any of those things at all. If you type in a prompt, it will give you a probabilistic response based on the enormous amount of data it has to work with; nothing it does involves 'thought' as we understand it. Sometimes what you ask falls down the cracks of what they are able to deal with, so you get something that sounds authoritative but is either wholly false or (most annoyingly) a mixture of fact and what it presents as true. It's a bit like a weather forecast: it is often remarkably accurate, but it's dealing with probabilities, not certainties, so you can sometimes find it pouring with rain when it's supposed to be sunny.
It's just a highly sophisticated algorithm responding to your input; it's not 'intelligence' in the way we understand it.
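A minimal sketch of the "probabilistic response" point, assuming a toy table of next-word probabilities - all the words and numbers are invented, and real systems are vastly larger, but the principle is the same: the output is sampled word by word from what usually follows, which is why it can read fluently while being wrong.

Code:
import random

# Invented toy "model": for each word, the probabilities of what tends to follow it.
next_word = {
    "the":     {"cat": 0.5, "weather": 0.3, "answer": 0.2},
    "cat":     {"sat": 0.7, "slept": 0.3},
    "sat":     {"on": 1.0},
    "on":      {"the": 1.0},
    "weather": {"is": 1.0},
    "is":      {"sunny": 0.6, "raining": 0.4},
}

def generate(start, steps=6):
    word, out = start, [start]
    for _ in range(steps):
        dist = next_word.get(word)
        if not dist:
            break
        words, probs = zip(*dist.items())
        # Pick the next word by probability - a plausible continuation, not a checked fact.
        word = random.choices(words, weights=probs)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the weather is" - fluent, but no 'thought' involved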
Originally posted by Darkbloom:
It's just a highly sophisticated algorithm responding to your input; it's not 'intelligence' in the way we understand it.

In a nutshell! It knows nothing; it just ploughs through the available zettabits (?) to see if there's something which apparently fits your enquiry. It's up to you to discover how well it does that.
I actually do use Google AI, not for the answers it gives, but for the sources from which it gleaned its information. This frequently turns up the kind of sources which I (in my ignorance) would trust, e.g. peer-reviewed research papers.