AI - Cheating or just evolution?

This forum is for anything not Reason related, if you just want to talk about other stuff. Please keep it friendly!
User avatar
crimsonwarlock
Posts: 2432
Joined: 06 Nov 2021
Location: Close to the Edge

Post 05 May 2023

Higor wrote:
04 May 2023
All of this is not hype; it's a warning of what is to come, and faster than people might think. It's not a matter of "if", but "when".
It IS hype, and your comment (and comments from other people here) show it. "A warning of what is to come" and "faster than people might think" are blanket statements that are out of sync with the reality of what this technology actually is and what it is capable of (and, more importantly, NOT capable of). You state those things while clearly not knowing how this tech works, so you're basically following the hype.
-------
Analog tape ⇒ ESQ1 sequencer board ⇒ Atari/Steinberg Pro24 ⇒ Atari/Cubase ⇒ Cakewalk Sonar ⇒ Orion Pro/Platinum ⇒ Reaper ⇒ Reason DAW.

User avatar
deeplink
Competition Winner
Posts: 1083
Joined: 08 Jul 2020
Location: Dubai / Cape Town

Post 05 May 2023

I'm not really sure what "hype" means, but I certainly agree this technology is remarkable and will impact how people go about things.

Specifically music-related:

Publishers and mainstream artists - it will certainly have an impact on copyright.

For the avid music maker, I don't know if it will change much. It certainly won't stop people from wanting to learn the guitar and perhaps record themselves.

I don't see much difference between sampling an old vinyl record of a string section, or finding a "string section" loop on Splice, or eventually requesting some AI to generate a string section loop. At the end of the day, being a music maker generally means imagining a sound or finding a sound that you like, and then implementing it in your music. This won't change much.

In fact, it just makes producing good music a lot more accessible. When I started, there were no huge free and accessible sound libraries full of loops. As a result, the first bits of "music" I made sounded relatively terrible. Now, I hear people starting out in music production who are able to get a really professional sound - mostly because of using loops from Splice etc. This is not a bad thing at all; in fact, it is awesome. How many more newcomers will get a big break because the beat they made now has an AI-generated voice over it?

This technology only democratizes the ability to make good music, which is exactly what the DAW did all those years ago.

For sample labels - e.g. Splice, Loopmaster etc. - they may need to re-strategize. I've already seen AI-based VSTs that can take a drum loop and sonically transform it into an entirely different instrument - e.g. a bongo pattern. This either means that these sample labels - with their enormous libraries - will eventually be able to generate thousands of new loops at the touch of a button (goodbye to any individual who makes sample packs), or these sample labels will become redundant in their entirety.

I do, however, think anyone who specifically earns an income making stock music - for the purposes of low-budget advertising/media implementation - should be concerned, since I can see such music eventually being entirely generated.

For those of us who make music for ourselves, we will just have some more tools to play with.

EDIT: Specifically Reason-related - I do foresee other DAWs (as we all know) being quicker to adopt new technology. It's unclear if these AI tools will simply remain VST add-ons or will be integrated natively into the DAW. If it is the latter, Reason (the DAW) will need to keep up with the game. Likewise with VST3, VST MIDI out, etc. If a newcomer to music production is going to pick a DAW, they will generally choose the one that offers them the most flexibility and provides access to the latest competitive technology.
Get more Combinators at the deeplink website

User avatar
crimsonwarlock
Posts: 2432
Joined: 06 Nov 2021
Location: Close to the Edge

Post 05 May 2023

deeplink wrote:
05 May 2023
I'm not really sure what "hype" means ...
https://www.dictionary.com/browse/hype
to intensify (advertising, promotion, or publicity) by ingenious or questionable claims, methods, etc.
exaggerated publicity
an ingenious or questionable claim, method, etc., used in advertising, promotion, or publicity to intensify the effect.

User avatar
Quarmat
Competition Winner
Posts: 467
Joined: 11 Feb 2021
Location: Europe

Post 05 May 2023

deeplink wrote:
05 May 2023

Specifically music related;

(...)
These are very good points
deeplink wrote:
05 May 2023
EDIT: Specifically Reason related: - I do foresee other DAWs (as we all know) being more quick to adopt new technology. (...)
Yeah, the AI revolution is one of those "breaks" in tech history that somehow resets the panorama, so a player who might have fallen a little behind their competitors can get on par or ahead of them by embracing the new tech. And I do hope the guys in Stockholm, who have proved themselves very forward-looking and innovative over the last 25 years (ok, ok: and also stubborn and closed-minded on other issues), will embrace AI tech in the good ol' Reason way: giving us fun and inspiring tools to make music, and helping us feel good while making it.

User avatar
bxbrkrz
Posts: 3857
Joined: 17 Jan 2015

Post 05 May 2023

Quarmat wrote:
05 May 2023
Yeah, the AI revolution is one of those "breaks" in tech history that somehow resets the panorama (...)
Image
757365206C6F67696320746F207365656B20616E73776572732075736520726561736F6E20746F2066696E6420776973646F6D20676574206F7574206F6620796F757220636F6D666F7274207A6F6E65206F7220796F757220696E737069726174696F6E2077696C6C206372797374616C6C697A6520666F7265766572
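(Editor's aside: the long hex string in the signature above is just ASCII text encoded as hexadecimal; a minimal Python sketch to decode it:)

```python
# bxbrkrz's forum signature, copied verbatim: a hex-encoded ASCII string.
sig_hex = "757365206C6F67696320746F207365656B20616E73776572732075736520726561736F6E20746F2066696E6420776973646F6D20676574206F7574206F6620796F757220636F6D666F7274207A6F6E65206F7220796F757220696E737069726174696F6E2077696C6C206372797374616C6C697A6520666F7265766572"

# Convert each pair of hex digits to a byte, then decode the bytes as ASCII.
message = bytes.fromhex(sig_hex).decode("ascii")
print(message)  # → "use logic to seek answers use reason to find wisdom ..."
```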

User avatar
crimsonwarlock
Posts: 2432
Joined: 06 Nov 2021
Location: Close to the Edge

Post 05 May 2023

Quarmat wrote:
05 May 2023
I do hope the guys in Stockholm ... will embrace AI tech ...
If that happens, I think I'll move back to a hardware-only setup :puf_bigsmile:

Having said that, the way things are now, I don't think they will add such stuff to the Reason DAW (as selling stuff separately seems to be their marketing strategy). So, I'll probably just decide not to buy such rack extensions :thumbup:

Tiny Montgomery
Posts: 439
Joined: 22 Apr 2020

Post 05 May 2023

I'm stealing this from someone on Twitter who is discussing AI with regard to text and poetry etc., but the point is relevant, I think:

"AI cannot make hermeneutical judgements, it can only describe the hermeneutical judgements of human agents by reproducing them. The AI thus has no authority, it cannot for instance "interpret" a text's implications, nor could it detect allusions or ironies unless told to do so."

Further in the thread he says:

"You can use it to convert a grocery store list into a more well organised grocery store list.. All that it can do is eliminate "algorithmic mental labor" -- but it will never be able to translate intentionality or hermeneutical meaning between texts, nor can it make insights"

Tiny Montgomery
Posts: 439
Joined: 22 Apr 2020

Post 05 May 2023

Crimson is correct, it's mostly hype imo.

Tiny Montgomery
Posts: 439
Joined: 22 Apr 2020

Post 05 May 2023

That being said, the other week I had my first experiment with ChatGPT. I asked it to write song lyrics in the style of The Beatles about being in the office, and one in the style of Nirvana about having flatulence, and they were hilarious, especially the latter.

I also asked it to transcribe an interview between Jesus Christ and Joe Rogan, which actually came out quite lovely.

User avatar
crimsonwarlock
Posts: 2432
Joined: 06 Nov 2021
Location: Close to the Edge

Post 05 May 2023

Tiny Montgomery wrote:
05 May 2023
Crimson is correct its mostly hype imo.
:thumbup:

User avatar
bxbrkrz
Posts: 3857
Joined: 17 Jan 2015

Post 05 May 2023

The new version of Midjourney that was released yesterday shows how far AI has come in making commercial-level images from text alone.

Here is what you get for "modern outfits inspired by Van Gogh / Basquiat / Monet / Rothko, fashion photoshoot". Each one is the first try, no revisions.



(three images attached)


User avatar
wendylou
Posts: 476
Joined: 15 Jan 2015
Location: Night City

Post 05 May 2023

How to Win Friends and Influence People by Dale Carnegie
  • The only way to get the best of an argument is to avoid it.
    You can’t win an argument. You can’t because, if you lose it, you lose it; and if you win it, you lose it. Why? Well, suppose you triumph over the other man and shoot his argument full of holes and prove that he is non compos mentis. Then what? You will feel fine, but what about him? You have made him feel inferior. You have hurt his pride. He will resent your triumph.
  • A man convinced against his will is of the same opinion still.
That said, A.I. is evolving rapidly, with more experts sounding the alarm that it will surpass human capabilities and be uncontrollable - if not now, then soon. Rick Beato just released a YouTube video, "The AI Effect: A New Era in Music and Its Unintended Consequences", in which he plays an "AI Drake" music creation that people are liking more than Drake! He notes that people won't care how it was created or by whom, just that they like it. He posits that, just like Napster and peer-to-peer sharing disrupted the music industry before, AI will be unstoppable, and the record companies may end up creating their own fully AI artists to capitalize on this.

Not convinced this is evolving? In the current writers' strike, writers are concerned about A.I. competing with their writing talents. IBM confirms the company will pause hiring and replace up to 7,800 jobs with AI. Google has plans to replace jobs with A.I. Geoffrey Hinton, one of the godfathers of A.I., just quit Google so he could be free to warn against the coming dangers of AI. Many in the industry are saying it's advancing way faster than they ever imagined possible. If it isn't taken seriously now, the tsunami is coming as AI matures.




And Snoop Dogg on AI risks: "Sh–, what the f—?": https://arstechnica.com/information-tec ... hat-the-f/
:puf_smile: http://www.galxygirl.com -- :reason: user since 2002

User avatar
crimsonwarlock
Posts: 2432
Joined: 06 Nov 2021
Location: Close to the Edge

Post 05 May 2023

wendylou wrote:
05 May 2023
Many in the industry are saying it's advancing way faster than they ever imagined possible.
That depends on who you are listening to. Many in the industry (mostly real cognitive scientists) are saying this has little to nothing to do with AGI or even AI-proper, and can in no way lead to the things that are predicted to happen. Just to give a hint: several so-called "hard problems" in cognition have been defined over the course of more than half a century of cognitive research, yet nobody in the current realm of LLMs is discussing those well-known problems. Why is that? Spoiler alert: Large Language Models and the underlying Deep Learning technology cannot solve these problems (like the Symbol Grounding problem and the Frame problem, to name just two).

avasopht
Competition Winner
Posts: 3975
Joined: 16 Jan 2015

Post 06 May 2023

wendylou wrote:
05 May 2023
How to Win Friends and Influence People by Dale Carnegie
  • The only way to get the best of an argument is to avoid it. (...)
This 💯
crimsonwarlock wrote:
05 May 2023
That depends on who you are listening to. Many in the industry (mostly real cognitive scientists) are saying this has little to nothing to do with AGI or even AI-proper. (...)
Since nobody has a clue how AGI is best achieved or approached, I'm guessing everyone just has their own bets on how best to approach it, and it's best that we have diverse approaches instead of presuming it must be based on this or that.

We don't even understand human intelligence.

You've made your bet on cognitive science.

Others have made their bets elsewhere.

We may find every current approach to be unsuitable.

I would bank on the simplest system(s) cultivating emergent properties; but not solely neural networks as they currently are. If neural networks are involved, I would expect them to be just one of many tools at play and not limited to the same basic architectures.

I don't believe in just throwing neural networks at a problem. Mixing methods is how DeepMind were able to create AlphaGo et al.

As for the well-known problems in cognitive science - I think they are important because you can use them to easily rule out a potential solution while also identifying areas for improvement.

But we also know that the brain did not require any of this to be understood or "solved" by nature, and while ANNs most certainly aren't remotely close to even thinking about doing that, the emergent behaviours found in the brain are more likely to manifest in a rich interconnected system than in a system designed by humans to resemble how we think human cognition works.

That's not to eschew or demote cognitive science. But we're looking for behaviour we know emerges in multiple contexts, and I'd put all my chips on that approach.



I often find a certain amount of arrogance in every department. This is most certainly not the way forward. This is how you end up with silos forever approaching a breakthrough that will always feel merely a moment out of reach.

User avatar
crimsonwarlock
Posts: 2432
Joined: 06 Nov 2021
Location: Close to the Edge

Post 06 May 2023

avasopht wrote:
06 May 2023
Since nobody has a clue how AGI is best achieved or approached...
That is not entirely true. There is quite a lot of understanding about what is needed to achieve AGI. The real problem is that the Deep Learning school of thought tries to redefine AGI into something that fits their narrative.
avasopht wrote:
06 May 2023
But we also know that the brain did not require any of this to be understood or "solved" by nature...
Again, not really correct. Nature has implemented (through evolution) all kinds of cognitive structures that are already understood quite well in neuroscience. Again, the problem here is that the school of Artificial Neural Networks is still stating that its models are "human-brain inspired", while we know very well from developments in neuroscience that ANNs have absolutely nothing in common with biological neurons.

Even the idiot mantra of "we don't understand how these LLMs work" (which is in itself not true) is now used as an argument that LLMs are human-like because we also don't know how the human brain works, which is also a blanket statement that is seriously inaccurate. Think about the OpenAI CEO stating that "GPT makes mistakes, but so do humans", trying to gaslight us into a mode of thinking where GPT is like human intelligence (it most certainly is not).

User avatar
crimsonwarlock
Posts: 2432
Joined: 06 Nov 2021
Location: Close to the Edge

Post 06 May 2023

wendylou wrote:
05 May 2023
  • A man convinced against his will is of the same opinion still.
No amount of evidence will ever persuade an idiot.

User avatar
bxbrkrz
Posts: 3857
Joined: 17 Jan 2015

Post 06 May 2023



Evolution?
Last edited by bxbrkrz on 06 May 2023, edited 1 time in total.

User avatar
selig
RE Developer
Posts: 11818
Joined: 15 Jan 2015
Location: The NorthWoods, CT, USA

Post 06 May 2023

crimsonwarlock wrote:
06 May 2023
No amount of evidence will ever persuade an idiot.
How to proceed then - do we assume anyone who isn't persuaded to our point of view is therefore an idiot?
Not sure where this is going or (who it’s aimed at), so maybe I’ll just make a suggestion to get back to the topic at hand and hope we all can move on. :)
Selig Audio, LLC

User avatar
bxbrkrz
Posts: 3857
Joined: 17 Jan 2015

Post 06 May 2023

Can AI call itself an idiot, and laugh it off?

User avatar
crimsonwarlock
Posts: 2432
Joined: 06 Nov 2021
Location: Close to the Edge

Post 06 May 2023

selig wrote:
06 May 2023
crimsonwarlock wrote:
06 May 2023
No amount of evidence will ever persuade an idiot.
How to proceed then - do we assume anyone who isn't persuaded to our point of view is therefore an idiot?
Not sure where this is going or (who it’s aimed at), so maybe I’ll just make a suggestion to get back to the topic at hand and hope we all can move on. :)
It was just a reaction to oppose the line I quoted in my reply. It's an existing meme, by the way, nothing personal and not aimed at anyone specific :puf_wink:

... and NO! Someone not agreeing with you doesn't make them an idiot. However, not agreeing with a large amount of evidence... well, I hope you get the point now :puf_bigsmile:

quote-no-amount-of-evidence-will-ever-persuade-an-idiot-mark-twain-135-16-65.jpg

Funny thing, the quote is attributed to Twain, but it seems it didn't come from him.

User avatar
DaveyG
Posts: 2575
Joined: 03 May 2020

Post 06 May 2023

bxbrkrz wrote:
06 May 2023
Can AI call itself an idiot, and laugh it off?
Can AI laugh and mean it?

User avatar
bxbrkrz
Posts: 3857
Joined: 17 Jan 2015

Post 06 May 2023

DaveyG wrote:
06 May 2023
bxbrkrz wrote:
06 May 2023
Can AI call itself an idiot, and laugh it off?
Can AI laugh and mean it?
A phony laugh can be the best way to de-escalate a situation between humans. AI should be able to lie while laughing, as lying is a tool in evolution to avoid early annihilation :D

User avatar
wendylou
Posts: 476
Joined: 15 Jan 2015
Location: Night City

Post 06 May 2023

crimsonwarlock wrote:
05 May 2023
wendylou wrote:
05 May 2023
Many in the industry are saying it's advancing way faster than they ever imagined possible.
That depends on who you are listening to. Many in the industry (mostly real cognitive scientists) are saying this has little to nothing to do with AGI or even AI-proper. (...)
Two days ago, Geoffrey Hinton, the godfather of AI who quit Google to speak out, essentially told the audience that he's worried for humanity. He is the one who helped invent back-propagation, which opened the floodgates for advancing AI. He believes that AI will soon surpass humans, and he's sounding the alarm, but has no solution to offer. Worth watching his MIT talk from two days ago (39 min.)


User avatar
crimsonwarlock
Posts: 2432
Joined: 06 Nov 2021
Location: Close to the Edge

Post 06 May 2023

wendylou wrote:
06 May 2023
Two days ago Geoffrey Hinton, the godfather of AI who quit Google to speak out, essentially tells the audience that he's worried for humanity. (...)
Not so long ago, Hinton stated that deep learning was NOT the way to go, because the human brain does nothing that looks like back-propagation. The "other" person who invented deep learning, Yann LeCun (who was awarded the Turing Award together with Hinton), has also moved to criticize his own invention as not being the way to AGI.

In regard to Hinton's recent moves, Nick Bostrom has shown that you can make more money with preaching AI doom, than with actually trying to solve AI-proper.

avasopht
Competition Winner
Posts: 3975
Joined: 16 Jan 2015

Post 07 May 2023

crimsonwarlock wrote:
06 May 2023
That is not entirely true. There is quite a lot of understanding about what is needed to achieve AGI. The real problem is that the Deep Learning school of thought tries to redefine AGI into something that fits their narrative.
I've seen a few cognitive scientists on Quora share their theories of how to approach it, and so on. I'm assuming you're privy to more in-depth discussions with other cognitive scientists and might have some ideas of your own (but I do recall you saying you didn't really want to speak about this stuff, so I won't pry).

However, it sounds to me like there's some tension, hostility and conflict in the whole conversation (not here, but between cognitive and computer scientists).

An us-vs-them mentality.

Little silos all trying to devalue the other.

We don't have a handle on intelligence in general, so I'm all ears on any intuitions of how AGI is best approached (got one article open in another tab).
crimsonwarlock wrote:
06 May 2023
avasopht wrote:
06 May 2023
But we also know that the brain did not require any of this to be understood or "solved" by nature...
Again, not really correct. Nature has implemented (through evolution) all kinds of cognitive structures that are already understood quite well in neuroscience. Again, the problem here is that the school of Artificial Neural Networks are still stating that their models are "human brain inspired" while we know very well from developments in neuroscience that ANNs have absolutely nothing in common with biological neurons.
What I mean is that it might not be necessary to explicitly solve the big problems. Be aware of them. Understand them. But creating a substrate to cultivate the desired emergent behaviours and abilities is also a viable strategy.

Ideally, it makes the most sense to me for everyone to try their own approach because while there are cognitive and computer scientists alike that are confident they know the best way forward, they could both be equally wrong.
crimsonwarlock wrote:
06 May 2023
Think about the OpenAI CEO stating that "GPT makes mistakes, but so do humans", trying to gaslight us into a mode of thinking where GPT is like human intelligence (it most certainly is not).
Is that what he actually meant? Was he saying "ChatGPT makes mistakes and humans make mistakes, therefore they are equivalent"? Or was he saying that humans also make mistakes, therefore the mistakes should not be seen as a total failure?
crimsonwarlock wrote:
06 May 2023
Not so long ago, Hinton stated that Deep learning was NOT the way to go because the human brain does nothing that looks like back-propagation. The "other" person who invented Deep Learning, Yann Lecun (he was awarded the Turing award together with Hinton) has also moved to criticize his own invention as not being the way to AGI.

In regard to Hinton's recent moves, Nick Bostrom has shown that you can make more money with preaching AI doom, than with actually trying to solve AI-proper.
Obviously deep learning alone is not the way for AGI. Are any prominent experts in Deep Learning suggesting this (because it's a very naive idea)?

I also agree with your mention of "neural networks" being a hype term. It is. I've always preferred to think of them as differentiable *somethings* (learners, etc.). This is why I think progress has slowed/stagnated: students aren't developing an intuition quickly enough to develop their own ideas. Ditto for all of the maths involved - there are much better ways to teach it (which I've used successfully with struggling students).

But as I said, I see a lot of hostility between factions that I think has sort of spilt over into this discussion a bit.

I tend to see positions eagerly dismissed as just "hype" and "fear". This is a blinding stance.

There is hype. There are concerns. There is restraint and temperance. There is nonchalance.

I don't think human beings are intelligent enough to create AGI by understanding general intelligence, and I think we're too dumb to realize that.

I think our best bet is to focus on creating a substrate to cultivate the type of emergence we want to see - but will take insight and inspiration from cognitive science, neurology and any relevant topics.
