AI - Cheating or just evolution?

PhillipOrdonez
Posts: 4348
Joined: 20 Oct 2017
Location: Norway
Contact:

20 Apr 2025

It has been years and the same basic flaws persist. At a certain point we should be able to say it isn’t worth the effort, and that it’s easier to pay an expert to do it for you or learn it yourself. I’m lazy and my budget is limited. I would offload a ton of work if this LLM crap worked! But it still doesn’t! One wastes more time using it, training and explaining and checking the result, only to realise it didn’t work and no time was saved. And think of all the waste it causes along the way. In the end there’s only a net negative: time AND resources spent, with no work to show for it. 🤷‍♂️

Machine learning can potentially be useful for specific tasks like you say, but I doubt it’ll ever be a capable mix assistant or anything like that beyond what’s currently out there in things like Neutron and Ozone. That requires a certain level of intelligence, and you guys who’ve read this thread already know my opinion on that. But the fact it can potentially become useful for specific tasks begs the question: which tasks are these? Are they tasks worth automating? The trend, it seems to me, is that the creators of this tech want to automate the fun, creative, meaningful stuff instead of automating the boring stuff.

avasopht
Competition Winner
Posts: 4120
Joined: 16 Jan 2015

20 Apr 2025

AI is being used privately by organisations to great effect.

They use LLMs differently as well. They are a poor tool when used on their own. Instead, smart organisations are using LLMs to construct things like knowledge graphs and combining them with other methods of AI.

bxbrkrz
Posts: 4111
Joined: 17 Jan 2015

20 Apr 2025

PhillipOrdonez wrote:
19 Apr 2025
bxbrkrz wrote:
19 Apr 2025
stupid.ai? No wonder.
Should be called SS; synthetic stupidity.
That too.
757365206C6F67696320746F207365656B20616E73776572732075736520726561736F6E20746F2066696E6420776973646F6D20676574206F7574206F6620796F757220636F6D666F7274207A6F6E65206F7220796F757220696E737069726174696F6E2077696C6C206372797374616C6C697A6520666F7265766572

PhillipOrdonez
Posts: 4348
Joined: 20 Oct 2017
Location: Norway
Contact:

20 Apr 2025

avasopht wrote:
20 Apr 2025
AI is being used privately by organisations to great effect.

They use LLMs differently as well. They are a poor tool when used on their own. Instead, smart organisations are using LLMs to construct things like knowledge graphs and combining them with other methods of AI.
Until they realise it’s all wrong cause the thing hallucinates more than me on acid.

avasopht
Competition Winner
Posts: 4120
Joined: 16 Jan 2015

20 Apr 2025

PhillipOrdonez wrote:
20 Apr 2025
Until they realise it’s all wrong cause the thing hallucinates more than me on acid.
That's what the Knowledge Graph is for.

See, LLMs do nothing but hallucinate. There's nothing in their architecture that makes them well suited to understanding or coherency.

LLMs are statistical models of language, originally designed for translation between languages; not sentience.

Like other statistical models of language, they can produce streams of text that look like they make sense. LLMs are better than previous statistical models (e.g. Markov chains) and can output streams of text that make sense for longer.
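To make "statistical model of language" concrete, here's a toy word-level Markov chain in Python (a sketch for illustration, not anyone's production code). It only knows which word tends to follow which, so it drifts into nonsense quickly; LLMs stay locally coherent for much longer, but the underlying idea of predicting a likely next token is the same.

Code: Select all

import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow each word in the corpus.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        # Pick a likely next word; fall back to any word at a dead end.
        word = random.choice(following.get(word) or corpus)
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug and the dog"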

Knowledge Graphs (KGs, often implemented as DAGs), on the other hand, are inherently coherent. They encode discrete knowledge/relationships/facts, e.g. "this owns that", "that likes this", etc.

Used together, LLMs can be used to construct Knowledge Graphs (catching hallucinations before they get stored), as well as to interpret inputs/questions/etc. into queries against the KG for stored knowledge, and then to translate the stored knowledge into a response.

Because KGs store discrete information in a non-statistical manner, the two work very well in conjunction with each other.
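As a rough illustration of "discrete and non-statistical" (a minimal sketch with invented facts, not a real KG system): a lookup either finds a stored fact or returns nothing; there is no guessing step where a hallucination could slip in.

Code: Select all

# A knowledge graph reduced to its essence: (subject, relation, object) triples.
triples = {
    ("dog", "is_a", "animal"),
    ("animal", "is_a", "living thing"),
    ("cow", "is_a", "animal"),
    ("cow", "eats", "herbs"),
}

def query(subject=None, relation=None, obj=None):
    """Return every stored triple matching the given pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (relation is None or t[1] == relation)
            and (obj is None or t[2] == obj)]

print(query(subject="cow"))                    # facts about cows
print(query(subject="cow", relation="flies"))  # [] - unknown, never invented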

Now, as for hallucinations: if you analyse debates and arguments, you'll find something similar is very common in human communication (both in the transmission of thoughts and in the interpretation of them).

They are referred to as filters, distortions and generalizations.

Hallucinations aren't a problem in themselves. The problem with LLMs is that they've become so good that we trust them enough to operate without being backed up by real-time coherence checking, or driven by motives and/or intentions (though the latter two can be prompted).

In time, they will be used in more effective ways and combined with other methods.

Right now, there's a lot of laziness and trust in just letting the neural network model "figure it out". It's quite possible smaller models combined with other algorithms will gain more attention, and you'll see far fewer hallucinations.

PhillipOrdonez
Posts: 4348
Joined: 20 Oct 2017
Location: Norway
Contact:

21 Apr 2025

So you’re saying you’re using the thing that hallucinates to create the thing that is used to check for errors. Got it.

Higor
Posts: 126
Joined: 19 Jan 2015

21 Apr 2025

PhillipOrdonez wrote:
19 Apr 2025
Should be called SS; synthetic stupidity.

So... they are investing billions of dollars in a useless toy? :clap:

avasopht
Competition Winner
Posts: 4120
Joined: 16 Jan 2015

21 Apr 2025

PhillipOrdonez wrote:
21 Apr 2025
So you’re saying you’re using the thing that hallucinates to create the thing that is used to check for errors. Got it.
No, that's not what I said.

You literally just hallucinated :clap:


While LLMs can hallucinate, humans are just as guilty of it.


We should not feel smug about their failings, because we are flawed in similar ways, yet less able to answer across a broad range of fields.

avasopht
Competition Winner
Posts: 4120
Joined: 16 Jan 2015

21 Apr 2025

PhillipOrdonez wrote:
19 Apr 2025
Tried using stupid AI to help me analyse some receipts, thought it was awesome what it was spitting out, till I realised it was lying. How useful can this crap be if it can’t do simple shit like looking at text? It can’t be trusted, not even at this point, years in. What a crock of shit. 🙄 I don’t understand how businesses are using it for anything useful other than content creation for marketing… which is simple shit already… gah. I guess the only ones benefiting from using this crap are teachers creating images to aid in teaching… and I guess, as per Re8et, coding (?) other than that, this is lame.
If you try to screw a nail with a corkscrew ...

If you ask a chess AI to interpret a receipt, obviously it can't do it.



What AI did you use for the task?

This sounds more like you just used the wrong tool for the job.

bxbrkrz
Posts: 4111
Joined: 17 Jan 2015

21 Apr 2025

What an Hallucinately Fabulous thread...
757365206C6F67696320746F207365656B20616E73776572732075736520726561736F6E20746F2066696E6420776973646F6D20676574206F7574206F6620796F757220636F6D666F7274207A6F6E65206F7220796F757220696E737069726174696F6E2077696C6C206372797374616C6C697A6520666F7265766572

PhillipOrdonez
Posts: 4348
Joined: 20 Oct 2017
Location: Norway
Contact:

21 Apr 2025

avasopht wrote:
20 Apr 2025
LLMs can be used to construct Knowledge Graphs
Did I hallucinate that?

PhillipOrdonez
Posts: 4348
Joined: 20 Oct 2017
Location: Norway
Contact:

21 Apr 2025

avasopht wrote:
21 Apr 2025
PhillipOrdonez wrote:
19 Apr 2025
Tried using stupid AI to help me analyse some receipts, thought it was awesome what it was spitting out, till I realised it was lying. How useful can this crap be if it can’t do simple shit like looking at text? It can’t be trusted, not even at this point, years in. What a crock of shit. 🙄 I don’t understand how businesses are using it for anything useful other than content creation for marketing… which is simple shit already… gah. I guess the only ones benefiting from using this crap are teachers creating images to aid in teaching… and I guess, as per Re8et, coding (?) other than that, this is lame.
If you try to screw a nail with a corkscrew ...

If you ask a chess AI to interpret a receipt, obviously it can't do it.



What AI did you use for the task?

This sounds more like you just used the wrong tool for the job.
I used ChatGPT.

PhillipOrdonez
Posts: 4348
Joined: 20 Oct 2017
Location: Norway
Contact:

21 Apr 2025

Higor wrote:
21 Apr 2025
PhillipOrdonez wrote:
19 Apr 2025
Should be called SS; synthetic stupidity.

So... they are investing billions of dollars in a useless toy? :clap:
Yeah, and they’re wasting it. Every prompt loses them money 😄👍

avasopht
Competition Winner
Posts: 4120
Joined: 16 Jan 2015

21 Apr 2025

PhillipOrdonez wrote:
21 Apr 2025
avasopht wrote:
20 Apr 2025
LLMs can be used to construct Knowledge Graphs
Did I hallucinate that?
Well:

1. I said something.
2. You said, "so you're saying ..." { gave your interpretation } " ... Got it."
3. Then I said, "No, that's not what I said. <new line> You literally just hallucinated."

Let's break it down.

"You're using the thing that hallucinates" = yes.
"To create the thing that is used to check for errors" = no.

The knowledge graph isn't used to check for errors. That's the hallucination.

But also, "using the thing that hallucinates" is a bit of a red herring. Yes, LLMs do hallucinate in varying amounts (just as humans do). Like humans, they hallucinate on some tasks more than others.

On some tasks, they "hallucinate" rarely (if at all).



Though, "hallucinate" is a bit of a hallucination in itself. LLMs don't "hallucinate" in the literal sense - so don't get too caught up in the term.



A cynic might say "hallucinate" is a term used to reframe errors in ways more palatable to investors and users, covering up the overreach in how these models are being used.



Anyway, a knowledge graph is just a map of relationships, like:

1. A dog is an animal.
2. An animal is a living thing.
3. Plants are living things.
4. Cows are animals.
5. Cows eat herbs.
6. Herbs are plants.

You can build a knowledge graph from a portion of text, e.g.
Wikipedia: Milla Jovovich wrote: Milica Bogdanovna Jovović (/ˈjoʊvəvɪtʃ/ YOH-və-vitch; born December 17, 1975), known professionally as Milla Jovovich (MEE-lə), is an American actress and former fashion model.


That might produce something like:

1. "Jovović" is-pronounced "/ˈjoʊvəvɪtʃ/ YOH-və-vitch".
2. "Milica Bogdanovna Jovović" known-professionally-as "Milla Jovovich".
3. "Milla Jovovich" refers-to "Milica Bogdanovna Jovović".
4. "Milla Jovovich" born-on "December 17, 1975".
5. "Milla Jovovich" is "American actress".
6. "Milla Jovovich" was "fashion model".

If you ask a question like, "what else is Milla Jovovich other than an actress?"

The system will translate it into a bunch of queries against the knowledge graph (even if you've slightly misspelt the name), find the node recording that she was a fashion model, and can then respond with something like, "She doesn't seem to do anything else at the moment, but she used to also be a fashion model."
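A hedged sketch of that flow in Python (the facts mirror the list above; a trivial keyword match stands in for the LLM's question-to-query translation step):

Code: Select all

facts = [
    ("Milla Jovovich", "is", "American actress"),
    ("Milla Jovovich", "was", "fashion model"),
    ("Milla Jovovich", "born-on", "December 17, 1975"),
]

def answer(question):
    # Stand-in for the LLM: decide which entity the question is about.
    if "milla jovovich" in question.lower():
        hits = [(rel, obj) for subj, rel, obj in facts
                if subj == "Milla Jovovich"]
        # Stand-in for the summarising step: render stored facts as prose.
        return "; ".join(f"{rel} {obj}" for rel, obj in hits)
    return "No stored knowledge about that."

print(answer("What else is Milla Jovovich other than an actress?"))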



Before LLMs there were much more sophisticated methods for Natural Language Processing.

LLMs aren't the only way to build knowledge graphs, but they can be pretty effective - especially when context is required, for example if the text says something like: "I saw John today. He gave me a pencil." You don't want to record "he gave-me pencil" in the knowledge graph. You want to record "John gave-me pencil".

LLMs have context baked into the architecture.
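
A toy illustration of that pronoun problem (real systems use proper coreference-resolution models; this just shows the idea):

Code: Select all

text = ["I saw John today.", "He gave me a pencil."]

def naive_extract(sentence):
    # Blindly takes the grammatical subject of the sentence.
    if "gave" in sentence:
        return (sentence.split()[0], "gave-me", "pencil")

def context_aware_extract(sentences):
    last_person = None
    for s in sentences:
        if "John" in s:
            last_person = "John"            # remember the last named entity
        if "gave" in s:
            yield (last_person or s.split()[0], "gave-me", "pencil")

print(naive_extract(text[1]))             # ('He', 'gave-me', 'pencil') - wrong
print(list(context_aware_extract(text)))  # [('John', 'gave-me', 'pencil')]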

avasopht
Competition Winner
Posts: 4120
Joined: 16 Jan 2015

21 Apr 2025

PhillipOrdonez wrote:
21 Apr 2025
avasopht wrote:
21 Apr 2025

This sounds more like you just used the wrong tool for the job.
I used ChatGPT.
Google's DocumentAI or Excel's PowerQuery might be better for that.

If you describe your problem on an AI forum or subreddit, they'll either point you in the right direction, or spend the afternoon writing the script for you 😉

PhillipOrdonez
Posts: 4348
Joined: 20 Oct 2017
Location: Norway
Contact:

22 Apr 2025

Gonna have to push back against your claim of me hallucinating there. What you said in the quoted post (all of it, not just the excerpts I’m providing here) means those KGs are used by the LLMs to stay on script, or to avoid errors (checking for errors).
avasopht wrote:
20 Apr 2025
PhillipOrdonez wrote:
20 Apr 2025
Until they realise it’s all wrong cause the thing hallucinates more than me on acid.
That's what the Knowledge Graph is for.

Knowledge Graphs (KGs, often implemented as DAGs), on the other hand, are inherently coherent. They encode discrete knowledge/relationships/facts, e.g. "this owns that", "that likes this", etc.

Used together, LLMs can be used to construct Knowledge Graphs (catching hallucinations before they get stored), as well as to interpret inputs/questions/etc. into queries against the KG for stored knowledge, and then to translate the stored knowledge into a response.

Because KGs store discrete information in a non-statistical manner, the two work very well in conjunction with each other.
Sure, people misunderstand each other in communication all the time. You think that and an LLM hallucinating are the same?

If I upload a shopping receipt and the LLM starts turning it into a table, which is basically copying and pasting text, but the things it puts in the table are items I’ve never bought, how is that the same as someone misinterpreting my words? If you gave the same task to an assistant and they started doing the same thing, you would never think “oh, they’re misinterpreting my words”, would you? Really?

avasopht
Competition Winner
Posts: 4120
Joined: 16 Jan 2015

22 Apr 2025

PhillipOrdonez wrote:
22 Apr 2025
Gonna have to push back against your claim of me hallucinating there. What you said in the quoted post (all of it, not just the excerpts I’m providing here) means those KGs are used by the LLMs to stay on script, or to avoid errors (checking for errors).
Well, it might be easier if you learned how LLMs, neural networks, and KGs work (as I just might be doing a terrible job of explaining 🤷).

Now, what I've written is not a complete description of how all of the systems work and work together, so it's inevitable that you MUST fill in the gaps between what I've written about the systems, and how the systems really work, right?

It's near impossible for your mental model to be correct based solely on what I've written, because I didn't use enough words (and diagrams) to fully communicate the intricacies of the systems.

Hallucination on your part is unavoidable. It's not a fault of your own, but a mere consequence of the communicative context (and the impracticality of people communicating verbosely enough to completely transmit a complete description of all involved phenomena).

But:
1. LLMs don't "use" KGs.
2. KGs don't necessarily "help them stay on script".
3. KGs aren't used to check for errors (or necessarily help them "avoid" errors).

What happens instead is that:
1. KGs are used alongside LLMs (though they don't have to be).
2. The typical approach to using them together: what you write is converted into queries against the Knowledge Graph database, a bunch of facts are spat out, those facts are presented to another LLM, which then summarises them.

Think of it as a sleight of hand, or a changing of responsibilities. Instead of relying on the LLM to construct an answer, it is instead used as a tool to find the answer (or possible answers), and then again to summarise the answer to you (but an LLM is less important at this stage, and more manual processes created by humans could be used).

But the error rate of LLMs used in these tasks is much, much lower than that of LLMs on their own - roughly the division of labour in the sketch below.
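
Put as code, that sleight of hand looks roughly like this (a sketch only: the stub functions stand in for a real model API, and the names are hypothetical):

Code: Select all

def llm(prompt):
    # Placeholder for a real LLM call; it just echoes here for demonstration.
    return f"[model output for: {prompt[:48]}...]"

def kg_lookup(kg, query):
    # Retrieval from the KG is a lookup, not generation.
    return kg.get(query, [])

def answer_with_kg(question, kg):
    # 1. An LLM would translate the free-text question into KG queries.
    queries = ["(Milla Jovovich, was, ?)"]   # what llm(question) might return
    # 2. The KG returns discrete stored facts.
    facts = [f for q in queries for f in kg_lookup(kg, q)]
    # 3. The LLM only rephrases the retrieved facts, leaving far less room
    #    to hallucinate than answering from its weights alone.
    return llm(f"Summarise these facts as an answer: {facts}")

kg = {"(Milla Jovovich, was, ?)": ["fashion model"]}
print(answer_with_kg("What else was Milla Jovovich?", kg))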

The real work is in building the Knowledge Graph (and in specific domains, they can be written by humans, or generated by LLMs and validated by humans).

For example, we had a task to build data models from a long-ass specification supplied to us as HTML4 documentation (usually done by hand, which would have taken months). I wrote code to transform the HTML4 documentation into the data model. Errors in the HTML4 documentation (such as illegal characters, etc.) resulted in errors in the data model (plus some incompatibilities). A human (me) corrected those few errors manually.

While the algorithm could "hallucinate" in its own way, it actually did a lot better than humans, who frequently made mistakes.
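
A hedged sketch of that kind of transformation (the HTML shape and field names are invented for illustration, using only Python's standard library); the point is that malformed rows get flagged for a human instead of being guessed at:

Code: Select all

from html.parser import HTMLParser

class FieldTableParser(HTMLParser):
    """Collect the rows of definition tables from legacy HTML docs."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells, self.rows = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
        elif tag == "tr" and self.cells:
            self.rows.append(tuple(self.cells))
            self.cells = []

    def handle_data(self, data):
        if self.in_cell:
            self.cells.append(data.strip())

doc = "<table><tr><td>price</td><td>decimal</td></tr></table>"
parser = FieldTableParser()
parser.feed(doc)

model, needs_human = {}, []
for row in parser.rows:
    if len(row) == 2 and row[0].isidentifier():
        model[row[0]] = row[1]        # clean row -> straight into the model
    else:
        needs_human.append(row)       # flag it; don't guess

print(model)        # {'price': 'decimal'}
print(needs_human)  # rows a human corrects manually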

Similar things are done when building Knowledge Graphs in business domains. When the corpus is prohibitively large, human validation is hard-limited, and is used to correct edge cases, course-correct how the algorithm does the transformation (to reduce erroneous KG representations), and evaluate the error rate.

Before LLMs, chatbots were built with handwritten scripts and mailmerge-like placeholders.

Anyway, the increase in accuracy (and reduction in error) is a consequence of the LLM not being used entirely, and instead they are used to merely translate the semantic knowledge to you as an answer - a task that yields fewer errors.

Now, even a naive system that uses LLMs only to produce KGs, produce KG queries, and then summarise the results is less erroneous than building an LLM on that knowledge to answer your question directly.

This is because converting text to semantic knowledge, and then summarising that semantic knowledge back to text (after being queried) is an inherently less ambiguous task than constructing a statistical model to perform the same function on its own.

Still, what I've written is not a complete course in neural networks, knowledge graphs (though KGs are much simpler to explain and to intuitively grasp), and transformer models (or whatever variant is being used by the newest version of ChatGPT).
PhillipOrdonez wrote:
22 Apr 2025
Sure, people misunderstand each other in communication all the time. You think that and an LLM hallucinating are the same?
How are they any different? 🤔
PhillipOrdonez wrote:
22 Apr 2025
If I upload a shopping receipt and the LLM starts turning it into a table, which is basically copying and pasting text, but the things it puts in the table are items I’ve never bought, how is that the same as someone misinterpreting my words? If you gave the same task to an assistant and they started doing the same thing, you would never think “oh, they’re misinterpreting my words”, would you? Really?
LLMs don't read receipts. They process words (well, technically they process "vector embeddings").

What happens is that ChatGPT hands your image over to some image analysis system, which spits out whatever it was designed to spit out.

So, you could be using a version of ChatGPT that uses an image recognition system that only spits out summaries, e.g. "this is a shopping receipt showing the date, a barcode, and a Walmart logo".

A newer version of ChatGPT (which you might need to pay more for or change settings) might use a different image recognition system that outputs character recognition data, which an LLM could use.

The LLM itself can only work with what it is provided, hence why I suggested using other tools designed for that.
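
For what it's worth, one way to hand the model real characters to work with (assuming the pytesseract package and the Tesseract binary are installed; "receipt.png" is a stand-in path):

Code: Select all

from PIL import Image
import pytesseract

# A dedicated OCR tool extracts the actual text from the image...
text = pytesseract.image_to_string(Image.open("receipt.png"))

# ...so the language model gets real characters from the receipt, rather
# than whatever a summarising image module chose to report about it.
prompt = f"Turn this receipt text into a table of items and prices:\n{text}"
print(prompt)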

Higor
Posts: 126
Joined: 19 Jan 2015

22 Apr 2025

One thing is certain, and undeniable: human beings hallucinate a lot. :lol:

PhillipOrdonez
Posts: 4348
Joined: 20 Oct 2017
Location: Norway
Contact:

22 Apr 2025

avasopht wrote:
22 Apr 2025
Got it, thanks for the mini course. Did you write it yourself, or did you get help from an LLM?

So there you have it folks, it is not that the thing sucks, it is that it is I who suck.

avasopht
Competition Winner
Posts: 4120
Joined: 16 Jan 2015

22 Apr 2025

PhillipOrdonez wrote:
22 Apr 2025
avasopht wrote:
22 Apr 2025
Got it, thanks for the mini course. Did you write it yourself, or did you get help from an LLM?

So there you have it folks, it is not that the thing sucks, it is that it is I who suck.
  1. Yes, I wrote it myself. I'm a Data Engineering consultant by profession (with a background in the videogames industry).

    I've also supported a PhD study on energy predictions in Saudi Arabia using machine learning (and introducing my own novel methods to account for "hallucinations" and basically factor them out).

    So yes, I did write it myself for you.
  2. I never said "you suck" or anything like it. That, ironically, is a hallucination. You've derived something from what I said that was not there and was not implied.

    Either you fully understand what someone has said, or you do not. Not fully understanding doesn't mean "you suck", and I would never suggest such a thing, because maybe I have just explained it poorly.

    This goes back to what I mentioned from A Mind of Its Own: How Your Brain Distorts and Deceives.

PhillipOrdonez
Posts: 4348
Joined: 20 Oct 2017
Location: Norway
Contact:

22 Apr 2025

Of course you never said that I suck, I say that myself based on what you said. That’s not a hallucination, that’s reasoning.

avasopht
Competition Winner
Posts: 4120
Joined: 16 Jan 2015

22 Apr 2025

PhillipOrdonez wrote:
22 Apr 2025
Of course you never said that I suck, I say that myself based on what you said. That’s not a hallucination, that’s reasoning.
We all suck.

And that's why we have music and art - so that we can suck a little less ;)

PhillipOrdonez
Posts: 4348
Joined: 20 Oct 2017
Location: Norway
Contact:

22 Apr 2025

avasopht wrote:
22 Apr 2025
PhillipOrdonez wrote:
22 Apr 2025
Of course you never said that I suck, I say that myself based on what you said. That’s not a hallucination, that’s reasoning.
We all suck.

And that's why we have music and art - so that we can suck a little less ;)
Makes gesture with arms, showing everything around, inviting the reader to look at the current state of the world, keeping in mind the context of this thread. No words uttered. They’re unnecessary.

Yonatan
Posts: 1646
Joined: 18 Jan 2015

Today

Stumbled across an AI thing promising to help finish tracks. Haven't tried it, bit sceptical, but it kind of shows what direction this will go. Logic introduced its AI instruments in the latest version. I only tried it briefly on iPad and it didn't impress me; felt boring. But of course this is just the beta stage of AI, where they try to include whatever they can to be in the hype.

https://aiode.com/

Now this is in beta, online only, but there seem to be plans for a future plugin etc. I'm not convinced of the quality compared to what one can do with tools from Toontrack, Ujam etc.
I expect to eventually see more AI incorporated by companies like these, NI, and other sample library makers.
But it's still some time before it actually works the way we dream of. Give it 5 years to develop and adapt, though, and we will have tools that actually help our composing reach another level, arranging more complex stuff while still keeping control over the process and the final adjustments.

At least the concept of virtual instrumentalists will be where producers' and artists' focus goes. And yes, we will see AI singers and artists pop up, not only as backing vocals but as lead singers. Some producers will create virtual artists with certain styles and choose freely for each project.
Real artists and singers will also become virtualized; commercially, they will hold royalty rights for their use. Some people will use and duplicate them without permission, pirating freely. Some will get sued.
You will be able to copyright your voice and use AI to scan the Internet for it, while pirates tweak their blends and try to fool the scanning AI by merging different artists into new ones. What happens if you blend MJ, FM etc.?
Some big-tech record-label AI detectors will try to trace such attempts.

But indie artists can clone their own style and voice and make AI work in service of exploring new dimensions of their own, challenging their own limitations and comfort zones while still staying within their domain of character.

Imagine working on a song idea and, just for fun, building your dream band to explore with... either with manually customized players, or by bringing in Miles Davis for a solo. And famous drummers, bassists, guitarists etc. Imagine playing live with them too.
Or trying different virtual producers and mixing engineers, each with a certain personal style.
Changing equipment however you like.

What do you think? What will we see onward?
