{"id":24142,"date":"2025-03-27T07:45:05","date_gmt":"2025-03-27T08:45:05","guid":{"rendered":"https:\/\/www.premium-partners.net\/?p=24142"},"modified":"2025-03-27T12:31:56","modified_gmt":"2025-03-27T12:31:56","slug":"firms-and-researchers-at-odds-over-superhuman-ai","status":"publish","type":"post","link":"https:\/\/www.premium-partners.net\/fr\/builder\/firms-and-researchers-at-odds-over-superhuman-ai\/","title":{"rendered":"Firms and researchers at odds over superhuman AI"},"content":{"rendered":"<p>This <a target='_blank' rel=\"nofollow\" href=\"https:\/\/www.iol.co.za\/business-report\/international\/firms-and-researchers-at-odds-over-superhuman-ai-c8671451-c9c6-4984-9af2-1c7aa0094da8\">post<\/a> was originally published on <a target='_blank' rel=\"nofollow\" href=\"https:\/\/www.iol.co.za\/\">this site<\/a><\/p><p><img decoding=\"async\" src=\"https:\/\/image-prod.iol.co.za\/16x9\/800?source=https:\/\/iol-prod.appspot.com\/image\/aea78f9a70f54c19fa4cbef1736fc808cb77f817\/2000&amp;operation=CROP&amp;offset=0x223&amp;resize=2000x1125\" class=\"type:primaryImage\" \/><\/p>\n<p>Hype is growing from leaders of major AI companies that &#8220;strong&#8221; computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.<\/p>\n<p>The belief that human-or-better intelligence &#8212; often called &#8220;artificial general intelligence&#8221; (AGI) &#8212; will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.<\/p>\n<p>&#8220;Systems that start to point to AGI are coming into view,&#8221; OpenAI chief Sam Altman wrote in a blog post last month. 
Anthropic&#8217;s Dario Amodei has said the milestone &#8220;could come as early as 2026&#8221;.<\/p>\n<p>Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.<\/p>\n<p>Others, though, are more sceptical.<\/p>\n<p>Meta&#8217;s chief AI scientist Yann LeCun told AFP last month that &#8220;we are not going to get to human-level AI by just scaling up LLMs&#8221; &#8212; the large language models behind current systems like ChatGPT or Claude.<\/p>\n<p>LeCun&#8217;s view appears backed by a majority of academics in the field.<\/p>\n<p>Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that &#8220;scaling up current approaches&#8221; was unlikely to produce AGI.<\/p>\n<p>&nbsp;<\/p>\n<h2>&#8216;Genie out of the bottle&#8217;<\/h2>\n<p>Some academics believe that many of the companies&#8217; claims, which bosses have at times flanked with warnings about AGI&#8217;s dangers for mankind, are a strategy to capture attention.<\/p>\n<p>Businesses have &#8220;made these big investments, and they have to pay off,&#8221; said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and an AAAI member.<\/p>\n<p>&#8220;They just say, &#8216;this is so dangerous that only I can operate it, in fact I myself am afraid but we&#8217;ve already let the genie out of the bottle, so I&#8217;m going to sacrifice myself on your behalf &#8212; but then you&#8217;re dependent on me&#8217;.&#8221;<\/p>\n<p>Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.<\/p>\n<p>&#8220;It&#8217;s a bit like Goethe&#8217;s &#8216;The Sorcerer&#8217;s Apprentice&#8217;, you have something you suddenly can&#8217;t control any more,&#8221; Kersting said &#8212; 
referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.<\/p>\n<p>A similar, more recent thought experiment is the &#8220;paperclip maximiser&#8221;.<\/p>\n<p>This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines &#8212; having first got rid of the human beings that, it judged, might hinder its progress by switching it off.<\/p>\n<p>While not &#8220;evil&#8221; as such, the maximiser would fall fatally short on what thinkers in the field call &#8220;alignment&#8221; of AI with human objectives and values.<\/p>\n<p>Kersting said he &#8220;can understand&#8221; such fears &#8212; while suggesting that &#8220;human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever&#8221; for computers to match it.<\/p>\n<p>He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.<\/p>\n<p>&nbsp;<\/p>\n<h2>&#8216;Biggest thing ever&#8217;<\/h2>\n<p>The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people&#8217;s attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain&#8217;s Cambridge University.<\/p>\n<p>&#8220;If you are very optimistic about how powerful the present techniques are, you&#8217;re probably more likely to go and work at one of the companies that&#8217;s putting a lot of resource into trying to make it happen,&#8221; he said.<\/p>\n<p>Even if Altman and Amodei are being &#8220;quite optimistic&#8221; about rapid timescales and AGI emerges much later, &#8220;we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen,&#8221; O hEigeartaigh added.<\/p>\n<p>&#8220;If it were anything 
else&#8230; a chance that aliens would arrive by 2030 or that there&#8217;d be another giant pandemic or something, we&#8217;d put some time into planning for it&#8221;.<\/p>\n<p>The challenge can lie in communicating these ideas to politicians and the public.<\/p>\n<p>Talk of super-AI &#8220;does instantly create this sort of immune reaction&#8230; it sounds like science fiction,&#8221; O hEigeartaigh said.<\/p>\n<p><strong>AFP&nbsp;<\/strong><\/p>","protected":false},"excerpt":{"rendered":"<p>Hype is growing from leaders of major AI companies that &#8220;strong&#8221; computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.<\/p>","protected":false},"author":1,"featured_media":24144,"comment_status":"open","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-24142","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-builder"],"_links":{"self":[{"href":"https:\/\/www.premium-partners.net\/fr\/wp-json\/wp\/v2\/posts\/24142","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.premium-partners.net\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.premium-partners.net\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.premium-partners.net\/fr\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.premium-partners.net\/fr\/wp-json\/wp\/v2\/comments?post=24142"}],"version-history":[{"count":1,"href":"https:\/\/www.premium-partners.net\/fr\/wp-json\/wp\/v2\/posts\/24142\/revisions"}],"predecessor-version":[{"id":24143,"href":"https:\/\/www.premium-partners.net\/fr\/wp-json\/wp\/v2\/posts\/24142\/revisions\/24143"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.premium-partners.net\/fr\/wp-json\/wp\/v2\/media\/24144"}],"wp:attachment":[{"href":"https:\/\/www.premium-partners.net\/fr\/wp-json\/wp\/v2\/media?parent=24142"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.premium-partners.net\/fr\/wp-json\/wp\/v2\/categories?post=24142"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.premium-partners.net\/fr\/wp-json\/wp\/v2\/tags?post=24142"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}