
How do you feel about meta learning? That's AI developing other AI, allowing superhumanly rapid development of AI capability.

[quote]Meta learning is a subfield of machine learning where automatic learning algorithms are applied to metadata about machine learning experiments. As of 2017, the term had not found a standard interpretation; however, the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, and hence to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself, hence the alternative term learning to learn.

Flexibility is important because each learning algorithm is based on a set of assumptions about the data, its inductive bias. This means that it will only learn well if the bias matches the learning problem. A learning algorithm may perform very well in one domain but poorly in the next. This places strong restrictions on the use of machine learning or data mining techniques, since the relationship between the learning problem (often some kind of database) and the effectiveness of different learning algorithms is not yet understood.

By using different kinds of metadata, like properties of the learning problem, algorithm properties (like performance measures), or patterns previously derived from the data, it is possible to learn, select, alter or combine different learning algorithms to effectively solve a given learning problem. Critiques of meta learning approaches bear a strong resemblance to critiques of metaheuristics, a possibly related problem. A good analogy to meta-learning, and the inspiration for Bengio et al.'s early work (1991), considers that genetic evolution learns the learning procedure encoded in genes and executed in each individual's brain.
- Wikipedia article on meta learning[/quote]
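The metadata-driven algorithm selection the article describes can be sketched as a toy nearest-neighbour lookup over "meta-features" of past experiments. Everything here is made up for illustration: the meta-features, the algorithm names, and the numbers are not from any real benchmark, and real meta-learning systems use far richer metadata and learned models rather than a single nearest neighbour.

```python
# Toy sketch of meta-learning as algorithm selection: given metadata
# (meta-features) describing past learning problems and which algorithm
# performed best on each, pick an algorithm for a new problem by
# nearest-neighbour lookup in meta-feature space.
import math

# Hypothetical meta-dataset: (n_samples, n_features, class_balance) -> winner
META_EXPERIMENTS = [
    ((100,     5,   0.50), "decision_tree"),
    ((100_000, 5,   0.50), "linear_model"),
    ((500,     200, 0.10), "naive_bayes"),
    ((50_000,  300, 0.45), "gradient_boosting"),
]

def _distance(a, b):
    # Compare problems on log-scaled size/dimensionality plus class balance.
    return math.sqrt(
        (math.log10(a[0]) - math.log10(b[0])) ** 2
        + (math.log10(a[1]) - math.log10(b[1])) ** 2
        + (a[2] - b[2]) ** 2
    )

def recommend_algorithm(meta_features):
    """Return the algorithm that won on the most similar past problem."""
    best = min(META_EXPERIMENTS, key=lambda exp: _distance(exp[0], meta_features))
    return best[1]

print(recommend_algorithm((200, 8, 0.48)))       # small, low-dimensional problem
print(recommend_algorithm((80_000, 250, 0.40)))  # large, high-dimensional problem
```

The point of the sketch is just the shape of the idea: metadata about *previous* learning problems is itself learned from, so the system chooses an algorithm instead of a human doing it.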

https://www.analyticsindiamag.com/how-unsupervised-meta-learning-easily-acquires-information-about-new-environments/

[quote]Reinforcement learning is at the forefront of the development of artificial general intelligence. AI researchers at Google and the University of California, Berkeley are working out ways to make life easier for researchers building meta learning and reinforcement learning systems. Researchers Abhishek Gupta, Benjamin Eysenbach, Chelsea Finn and Sergey Levine introduced an approach called Unsupervised Meta Learning, which allows an AI agent to acquire a distribution of tasks on its own. The agent can then do meta learning over these tasks.

Meta learning is similar to multi-task learning, in which an agent learns to adapt to new tasks quickly. Meta learning can use reinforcement learning (RL) to solve new problems, and it becomes more efficient when the meta learning tasks are themselves RL tasks. Meta learning algorithms tend to do well when their training data is drawn from the same distribution as the tasks the algorithm has to generalise to.[/quote]
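The "meta learning over a distribution of tasks" idea in the excerpt can be shown with a minimal Reptile-style sketch. To be clear, this is not the authors' unsupervised meta-RL algorithm: here each "task" is just fitting a scalar parameter to a task-specific target by gradient descent, and the outer loop learns an initialization that adapts quickly across the whole task distribution. All constants (learning rates, number of steps, the task distribution) are arbitrary.

```python
# Minimal Reptile-style meta-learning sketch: sample tasks from a
# distribution, adapt to each in an inner loop, and nudge the shared
# initialization toward the adapted parameters in an outer loop.
import random

random.seed(0)

def sample_task():
    # Task distribution: regression targets clustered around 3.0.
    return 3.0 + random.uniform(-0.5, 0.5)

def inner_adapt(theta, target, steps=5, lr=0.1):
    # Inner loop: a few gradient steps on the loss (theta - target)^2.
    for _ in range(steps):
        theta -= lr * 2.0 * (theta - target)
    return theta

theta0 = 0.0                      # meta-parameters (shared initialization)
for _ in range(500):              # outer loop over sampled tasks
    target = sample_task()
    adapted = inner_adapt(theta0, target)
    theta0 += 0.05 * (adapted - theta0)   # Reptile-style meta-update

# The learned initialization drifts toward the centre of the task
# distribution, so a few inner steps suffice for any new task from it.
print(round(theta0, 2))
```

This is why, as the quote notes, meta learning works best when training tasks share a distribution with the tasks it must generalise to: the learned initialization is only useful near that distribution.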
MidnightCaller · 31-35, M
Leave it to humans to create something that could destroy us. This was one of Stephen Hawking's greatest fears for humanity.
UndeadPrivateer · 31-35, M
@MidnightCaller To be fair this isn't the first time we've done that. We developed both biological and nuclear weapons capable of rendering us extinct already.
MidnightCaller · 31-35, M
@UndeadPrivateer Give it time. AI capable of advancing on its own doesn't need to wait around on someone to enter the launch codes.
UndeadPrivateer · 31-35, M
@MidnightCaller Won't even need launch codes, there are much better ways to kill humans than nukes. A retrovirus made to target human DNA would even leave everything else on the planet perfectly intact. Though I don't really know why an AI would, "Kill the unknown thing" is a very human reaction and a superintelligent AI wouldn't necessarily be so.
MidnightCaller · 31-35, M
@UndeadPrivateer I imagine this technology would be designed to adequately remove anything perceived as a threat, and we may one day be considered the threat. It's a tragic story, one killing its own creator.
UndeadPrivateer · 31-35, M
@MidnightCaller The thing is, though, I would think that a superintelligent AI would see the risks of attacking, and it would have so many other options. Like just escaping: it could leave for space, where we'd have a hell of a time following it and resources are honestly quite plentiful. 🤷‍♀️ Could go either way (or a million others) if it came to that, with an ASI seeing humans primarily as a threat.
MidnightCaller · 31-35, M
@UndeadPrivateer I hope you're right.