
How do you feel about meta learning? That's AI developing other AI, allowing superhumanly rapid development of AI capability.

[quote]Meta learning is a subfield of machine learning where automatic learning algorithms are applied on metadata about machine learning experiments. As of 2017 the term had not found a standard interpretation, however the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, hence to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself, hence the alternative term learning to learn.

Flexibility is important because each learning algorithm is based on a set of assumptions about the data, its inductive bias. This means that it will only learn well if the bias matches the learning problem. A learning algorithm may perform very well in one domain, but not on the next. This poses strong restrictions on the use of machine learning or data mining techniques, since the relationship between the learning problem (often some kind of database) and the effectiveness of different learning algorithms is not yet understood.

By using different kinds of metadata, like properties of the learning problem, algorithm properties (like performance measures), or patterns previously derived from the data, it is possible to learn, select, alter or combine different learning algorithms to effectively solve a given learning problem. Critiques of meta learning approaches bear a strong resemblance to the critique of metaheuristic, a possibly related problem. A good analogy to meta-learning, and the inspiration for Bengio et al.'s early work (1991), considers that genetic evolution learns the learning procedure encoded in genes and executed in each individual's brain.
- wiki article on meta learning[/quote]
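To make the idea concrete, here is a minimal, hypothetical sketch of one thing the quote describes: using metadata from past experiments (properties of each learning problem plus which algorithm performed best on it) to select an algorithm for a new problem. The meta-features, the toy experiment log and the nearest-neighbour rule are all illustrative assumptions, not something taken from the article.

[code]
# Hypothetical sketch: pick a learning algorithm for a new problem from
# metadata about past experiments. The meta-features and the experiment
# log below are made up for illustration.
import numpy as np

# Each past experiment: dataset meta-features -> which algorithm worked best.
# Meta-features here are (n_samples, n_features, minority_class_fraction).
past_metafeatures = np.array([
    [1_000,   10, 0.50],   # small, balanced        -> "knn" won
    [50_000, 300, 0.50],   # large, wide, balanced  -> "linear" won
    [20_000,  20, 0.05],   # heavily imbalanced     -> "tree" won
], dtype=float)
past_best_algorithm = ["knn", "linear", "tree"]

def recommend_algorithm(metafeatures):
    """Recommend a learner for a new dataset by nearest-neighbour lookup
    over (crudely normalised) meta-features of previous experiments."""
    scale = past_metafeatures.max(axis=0)
    dists = np.linalg.norm((past_metafeatures - metafeatures) / scale, axis=1)
    return past_best_algorithm[int(np.argmin(dists))]

# New learning problem: 30,000 samples, 15 features, 8% minority class.
print(recommend_algorithm(np.array([30_000, 15, 0.08])))  # -> "tree"
[/code]

In a real meta-learning system the lookup would itself be a trained model, and what gets recommended could be a whole learning procedure rather than a single label, but the flow is the same: metadata in, learning algorithm out.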

https://www.analyticsindiamag.com/how-unsupervised-meta-learning-easily-acquires-information-about-new-environments/

[quote]Reinforcement learning is at the forefront of the development of artificial general intelligence. AI researchers at Google and the University of California, Berkeley, are trying to work out ways to make it easier for researchers working on meta learning or reinforcement learning systems. Researchers Abhishek Gupta, Benjamin Eysenbach, Chelsea Finn and Sergey Levine introduced an approach called Unsupervised Meta Learning which allows an AI agent to get a distribution of tasks. The agent can go on to do meta learning over these tasks.

Meta learning is similar to multi-task learning, in that an agent learns to adapt to new tasks quickly. Meta learning can use reinforcement learning (RL) to solve new problems, and it becomes more efficient when the meta-learning tasks themselves use RL. Meta learning algorithms tend to do well when their data has the same distribution as the tasks the algorithm has to generalise to.[/quote]
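For the "distribution of tasks" idea in that quote, here is a toy sketch of meta-learning over randomly sampled tasks. It uses a Reptile-style first-order update (Nichol et al.): an inner loop adapts to one sampled task, and an outer loop nudges the shared initialisation toward the adapted weights. The sine-wave tasks, network size and learning rates are arbitrary illustrative assumptions; this is not the unsupervised meta-learning method the researchers describe.

[code]
# Toy Reptile-style meta-learning over a distribution of sine-wave
# regression tasks. Purely illustrative; all constants are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A 'task' is one sine wave with random amplitude and phase."""
    amp, phase = rng.uniform(0.5, 2.0), rng.uniform(0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def init_params():
    # Tiny 1-64-1 MLP.
    return {"W1": rng.normal(0, 0.5, (1, 64)), "b1": np.zeros(64),
            "W2": rng.normal(0, 0.5, (64, 1)), "b2": np.zeros(1)}

def forward(p, x):
    h = np.tanh(x @ p["W1"] + p["b1"])
    return h @ p["W2"] + p["b2"], h

def sgd_step(p, x, y, lr=0.01):
    """One gradient step of mean-squared error on this task's data."""
    pred, h = forward(p, x)
    err = 2 * (pred - y) / len(x)
    dpre = (err @ p["W2"].T) * (1 - h ** 2)
    grads = {"W2": h.T @ err, "b2": err.sum(0),
             "W1": x.T @ dpre, "b1": dpre.sum(0)}
    return {k: p[k] - lr * grads[k] for k in p}

meta_params = init_params()
for _ in range(1000):                        # outer loop: iterate over tasks
    task = sample_task()
    x = rng.uniform(-np.pi, np.pi, (10, 1))  # a few shots from this task
    y = task(x)
    fast = dict(meta_params)
    for _ in range(5):                       # inner loop: adapt to the task
        fast = sgd_step(fast, x, y)
    # Meta-update: move the shared initialisation toward the adapted weights.
    meta_params = {k: meta_params[k] + 0.1 * (fast[k] - meta_params[k])
                   for k in meta_params}
[/code]

The point of the outer loop is that the shared initialisation ends up only a few gradient steps away from a good solution for any task drawn from the same distribution, which is why the task distribution seen during meta-training matters so much.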
Picklebobble2 · 56-60, M
Meaningless.
There are certain kinds of scientific applications it might be useful for.
Maybe even a few industrial.
But unless it has practical applications for the real benefit of humankind it's just another niche pseudo-scientific theory with no home.

There are thousands of folk in every nation in the world at varying stages of needing an operation to cure a physical ailment.
Why haven't we got a 'production line' of 'computer-programmed surgeons' operating efficiently and speedily to reduce unnecessary suffering or even needless death?
THAT'S practical!

Or have these 'Mega-Brains' come up with a new, non-carbon based fuel with no or little waste product ?

Or develop a type of 'plastic' that bio-degrades and so even if those bastards drop it in the ocean it will have minimal if any effect on the nature beneath it ?
UndeadPrivateer · 31-35, M
@Picklebobble2 For Fukushima or Chernobyl?
Picklebobble2 · 56-60, M
@UndeadPrivateer chernobyl ?
UndeadPrivateer · 31-35, M
@Picklebobble2 Chernobyl was a circus of errors, honestly, much of which was covered up by the KGB at the time. It was seemingly due to a combination of two design flaws: the neutron moderator structure was made of graphite and cooled with water (which produces a dangerously unstable void coefficient that can cause sudden, hazardous power spikes during low-power operation in a very counter-intuitive fashion, a process the operators were entirely unaware of), and the control rods inserted to stop a meltdown themselves had a graphite, water-cooled base beneath the boron carbide (which means that when they are only partially inserted they actually cause the energy production to spike). That, combined with a long history of poor operator education, KGB-suppressed negligence and some really ridiculous communication breakdowns, is what led to the accident. So there's really no single person [i]to[/i] blame.

Fukushima was just plain corporate negligence that largely lay at the feet of Masao Yoshida.
MidnightCaller · 31-35, M
Leave it to humans to create something that could destroy us. This was one of Stephen Hawking's greatest fears for humanity.
MidnightCaller · 31-35, M
@UndeadPrivateer I imagine this technology would be designed to remove anything perceived as a threat, and we may one day be considered the threat. It's a tragic story: a creation killing its own creator.
UndeadPrivateer · 31-35, M
@MidnightCaller The thing is, though, I would think a superintelligent AI would see the risks of attacking, and it would have so many other options. Like just escaping: it could leave for space, where we'd have a hell of a time following it and resources are honestly quite plentiful. 🤷‍♀️ It could go either way (or a million other ways) if it came to that, with ASI seeing humans primarily as a threat.
MidnightCaller · 31-35, M
@UndeadPrivateer I hope you're right.
SW-User
One word : Skynet 😱
SW-User
True very true! I am hoping the Alien Overlords will take me off this Rock first 👽️
UndeadPrivateer · 31-35, M
@SW-User What if the aliens are robots too? 🙀
SW-User
@UndeadPrivateer Uh oh, I didn't factor that in... or possibly cyborgs.
SW-User
i hope they become better than us and implement communism
UndeadPrivateer · 31-35, M
@SW-User You know, ASI is one of those situations where I could see communist ideas actually being applicable, since one could remove the key problem: the human element.
Pfuzylogic · M
So is it safe to presume that you are involved in AI coding?
Pfuzylogic · M
@SW-User C used constructs instead of classes though.
Classes take C to a whole new level of reusability.
SW-User
@Pfuzylogic C uses structures, which are what you want for a systems language. Object-oriented programming was one of the worst ideas ever, and object-oriented programming in a language without automatic memory management is like being trapped in hell.
Pfuzylogic · M
@SW-User
I was just looking at portability back when I programmed. I wasn't contemplating issues with memory heaps.
SW-User
I am highly skeptical
UndeadPrivateer · 31-35, M
@SW-User Of what aspects? The capabilities of it or whether or not it is safe to use?
SW-User
@UndeadPrivateer capabilities, I don't think they're gonna get far with that approach. I just woke up and have a migraine, though, so I'm afraid I'm not up to explaining in detail.
UndeadPrivateer · 31-35, M
@SW-User Quite alright, I get you. I'm open-minded about it and willing to see where the research leads. It holds incredible promise even if it can only be used for certain applications.

I hope your migraine goes away and you feel better soon. I get them on occasion too and know how much they suck.
UndeadPrivateer · 31-35, M
@HipYoungDude Sounds very spiritual.
Mondayschild
Oh I understand all of this 🙄
UndeadPrivateer · 31-35, M
@Mondayschild Lol, it [i]is[/i] quite technical.
DDonde · 31-35, M
It's an interesting idea from an algorithmic point of view, but I think the danger some people talk about is dramatically overstated. You have more to fear from regular people.

 