
How do you feel about meta learning? That's AI developing other AI, allowing superhumanly rapid development of AI capability.

[quote]Meta learning is a subfield of machine learning where automatic learning algorithms are applied on metadata about machine learning experiments. As of 2017 the term had not found a standard interpretation, however the main goal is to use such metadata to understand how automatic learning can become flexible in solving learning problems, hence to improve the performance of existing learning algorithms or to learn (induce) the learning algorithm itself, hence the alternative term learning to learn.

Flexibility is important because each learning algorithm is based on a set of assumptions about the data, its inductive bias. This means that it will only learn well if the bias matches the learning problem. A learning algorithm may perform very well in one domain, but not on the next. This poses strong restrictions on the use of machine learning or data mining techniques, since the relationship between the learning problem (often some kind of database) and the effectiveness of different learning algorithms is not yet understood.

By using different kinds of metadata, like properties of the learning problem, algorithm properties (like performance measures), or patterns previously derived from the data, it is possible to learn, select, alter or combine different learning algorithms to effectively solve a given learning problem. Critiques of meta learning approaches bear a strong resemblance to the critique of metaheuristic, a possibly related problem. A good analogy to meta-learning, and the inspiration for Bengio et al.'s early work (1991), considers that genetic evolution learns the learning procedure encoded in genes and executed in each individual's brain.
- wiki article on meta learning[/quote]
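
To make the "metadata about machine learning experiments" idea concrete, here is a minimal Python sketch of meta learning as algorithm selection: past experiments are summarised by a few meta-features, and a meta-learner recommends an algorithm for a new problem. The scikit-learn models and the particular meta-features are illustrative assumptions on my part, not anything the article prescribes.

[code]
# A minimal sketch of meta-learning as algorithm selection, assuming the
# scikit-learn stack; the meta-features and candidate models are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

CANDIDATES = [LogisticRegression(max_iter=1000),
              KNeighborsClassifier(),
              DecisionTreeClassifier()]

def meta_features(X, y):
    """Cheap metadata describing a learning problem."""
    n_samples, n_features = X.shape
    class_balance = np.bincount(y).min() / np.bincount(y).max()
    return [n_samples, n_features, class_balance]

# 1) Run "machine learning experiments" on a collection of past problems
#    and record which candidate algorithm won on each.
meta_X, meta_y = [], []
for seed in range(20):
    X, y = make_classification(n_samples=200 + 40 * seed,
                               n_features=5 + seed % 10,
                               n_informative=3,
                               random_state=seed)
    scores = [cross_val_score(m, X, y, cv=3).mean() for m in CANDIDATES]
    meta_X.append(meta_features(X, y))
    meta_y.append(int(np.argmax(scores)))          # index of the best algorithm

# 2) The meta-learner maps problem metadata -> recommended algorithm.
meta_learner = DecisionTreeClassifier(random_state=0).fit(meta_X, meta_y)

# 3) For a new learning problem, recommend an algorithm without trying them all.
X_new, y_new = make_classification(n_samples=500, n_features=8, random_state=99)
recommended = CANDIDATES[meta_learner.predict([meta_features(X_new, y_new)])[0]]
print("Recommended:", type(recommended).__name__)
[/code]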

https://www.analyticsindiamag.com/how-unsupervised-meta-learning-easily-acquires-information-about-new-environments/

[quote]Reinforcement learning is at the forefront of the development of artificial general intelligence. AI researchers at Google and the University of California, Berkeley, are trying to work out ways to make it easier for researchers working on meta learning or reinforcement learning systems. Researchers Abhishek Gupta, Benjamin Eysenbach, Chelsea Finn and Sergey Levine introduced an approach called Unsupervised Meta Learning which allows an AI agent to get a distribution of tasks. The agent can go on to do meta learning over these tasks.

Meta learning is similar to multi-task learning, where an agent learns to adapt to new tasks quickly. Meta learning can use reinforcement learning (RL) to solve new problems, and it becomes more efficient by tackling meta-learning tasks with RL. Meta learning algorithms tend to do well when the data they have follows the same distribution as the tasks the algorithm has to generalise to.[/quote]
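
For a concrete picture of meta learning over a distribution of tasks, here is a minimal sketch in the spirit of first-order methods like Reptile (not the unsupervised meta-RL algorithm the researchers introduced): the outer loop learns an initialisation that adapts to a new task from the same distribution in just a few gradient steps. The toy regression tasks, model and hyperparameters are illustrative assumptions.

[code]
# A minimal sketch of meta-learning over a task distribution, Reptile-style.
# Tasks, model and hyperparameters below are illustrative, not from the article.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A task is a simple regression problem y = a*x + b with random a, b."""
    a, b = rng.uniform(0.5, 1.5), rng.uniform(-1.0, 1.0)
    return a, b

def task_batch(a, b, n=20):
    x = rng.uniform(-2, 2, size=n)
    return x, a * x + b

def sgd_adapt(params, x, y, steps=5, lr=0.05):
    """Inner loop: adapt (w, b) to one task with a few gradient steps."""
    w, b = params
    for _ in range(steps):
        err = (w * x + b) - y
        w -= lr * 2 * np.mean(err * x)   # d/dw of mean squared error
        b -= lr * 2 * np.mean(err)       # d/db of mean squared error
    return np.array([w, b])

# Outer loop: nudge the meta-parameters toward each task's adapted parameters,
# so the initialisation ends up easy to fine-tune on new tasks from the same
# distribution ("learning to learn" an initialisation).
meta = np.array([0.0, 0.0])
for _ in range(1000):
    a, b = sample_task()
    x, y = task_batch(a, b)
    adapted = sgd_adapt(meta.copy(), x, y)
    meta += 0.1 * (adapted - meta)       # Reptile-style meta-update

# On a new, unseen task, adapting from the meta-learned init usually reaches
# low error in fewer steps than starting from scratch.
a, b = sample_task()
x, y = task_batch(a, b)
for init, name in [(meta.copy(), "meta-learned init"),
                   (np.array([0.0, 0.0]), "cold start")]:
    w, bb = sgd_adapt(init, x, y, steps=3)
    print(f"{name}: MSE after 3 adaptation steps = {np.mean((w * x + bb - y) ** 2):.4f}")
[/code]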
Picklebobble2 · 56-60, M
Meaningless.
There are certain kinds of scientific applications it might be useful for.
Maybe even a few industrial.
But unless it has practical applications for the real benefit of humankind it's just another niche pseudo-scientific theory with no home.

There are thousands of folk in every nation in the world at varying stages of needing an operation to cure a physical ailment.
Why haven't we got a 'production line' of 'computer programmed surgeons' operating efficiently and speedily to reduce unnecessary suffering or even needless death ?
THAT'S practical !

Or have these 'Mega-Brains' come up with a new, non-carbon based fuel with no or little waste product ?

Or develop a type of 'plastic' that bio-degrades and so even if those bastards drop it in the ocean it will have minimal if any effect on the nature beneath it ?
UndeadPrivateer · 31-35, M
@Picklebobble2 Those would all be lovely applications for advanced AI, and I think they're all probably viable. Have you seen the AI-designed car frames? They're pretty neat; some of the forms almost remind me of spider webs.
Picklebobble2 · 56-60, M
@UndeadPrivateer I re-read your piece and think that the 'practical applications' will come [i]after[/i] the 'learning processes' have been 'perfected'.
But THAT'S the worry !
It makes sense for AI to take the place of split-second human decision making, since (theoretically) AI's response times would be quicker and 'disaster averted' as a result !
But when you see glitches like the one for O2 today that wiped an entire network out for an entire DAY, would you [i]really[/i] want to trust a 'non-human' decision maker at the point of criticality ??
UndeadPrivateer · 31-35, M
@Picklebobble2 I think the ex-residents of Chernobyl and Fukushima are well aware of human errors at points of criticality. 😅
Picklebobble2 · 56-60, M
@UndeadPrivateer Did they ever admit what the fault was and who it lay with ??
UndeadPrivateer · 31-35, M
@Picklebobble2 For Fukushima or Chernobyl?
Picklebobble2 · 56-60, M
@UndeadPrivateer Chernobyl?
UndeadPrivateer · 31-35, M
@Picklebobble2 Chernobyl was a circus of errors, honestly, much of which was covered up by the KGB at the time. It was seemingly down to a combination of things: the neutron moderator being made of graphite while the reactor was cooled with water (which leads to a dangerously unstable void coefficient that can cause sudden, hazardous power spikes at low-power operation in a very counter-intuitive fashion, something the operators were entirely unaware of), plus the design of the control rods meant to stop a meltdown, which had a water-cooled graphite base beneath the boron carbide (so when they are only partially inserted they actually make energy production spike). Add to that a long history of poor operator education, KGB-suppressed negligence and some really ridiculous communication breakdowns, and you get the accident. So there's really no single person [i]to[/i] blame.

Fukushima was just plain corporate negligence that largely lay at the feet of Masao Yoshida.