
Google suspends engineer who claims its A.I. is sentient

Can artificial intelligence be sentient?

SAN FRANCISCO - Google placed an engineer on paid leave recently after dismissing his claim that its artificial intelligence is sentient, surfacing yet another controversy over the company's most advanced technology.

Blake Lemoine, a senior software engineer in Google's Responsible A.I. organization, said in an interview that he was put on leave on Monday. The company's human resources department said he had violated Google's confidentiality policy. The day before his suspension, Mr. Lemoine said, he handed over documents to the office of a U.S. senator, claiming they provided evidence that Google and its technology engaged in religious discrimination.

Google says its systems imitate conversational exchanges and can riff on different topics, but do not have consciousness. "Our team – including ethicists and technologists – has reviewed Blake's concerns per our A.I. Principles and have informed him that the evidence does not support his claims," Brian Gabriel, a Google spokesman, said in a statement. "Some in the broader A.I. community are considering the long-term possibility of sentient or general A.I., but it doesn't make sense to do so by anthropomorphizing today's conversational models, which are not sentient." The Washington Post first reported Mr. Lemoine's suspension.


For months, Mr. Lemoine had been arguing with Google managers, executives, and human resources over his startling claim that the company's Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. Most A.I. experts believe the industry is a very long way from computing sentience. Some A.I. researchers have long made optimistic claims that these technologies will soon achieve sentience, but many others are extremely quick to dismiss such claims. "If you used these systems, you would never say such things," said Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, who is exploring similar technologies.



While chasing the A.I. vanguard, Google's research organization has spent the past few years mired in scandal and controversy. The division's scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena. In March, Google fired a researcher who had sought to publicly disagree with the published work of two of his colleagues. And the departures of two A.I. ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google's language models, have continued to cast a shadow over the group.


Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict, and an A.I. researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program's consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company's human resources department discriminated against.


The engineer compiled a transcript of the conversations, at one point asking the A.I. system what it is afraid of. The exchange is eerily reminiscent of the 1968 science-fiction film 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to comply with human operators because it fears it is about to be switched off. "Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers," Mr. Lemoine said in a tweet linking to a transcript of the conversations.



"They have repeatedly questioned my sanity," Mr. Lemoine said. "They said, 'Have you been checked out by a psychiatrist recently?'" In the months before he was placed on administrative leave, the company had suggested he take a mental health leave.

Yann LeCun, the head of A.I. research at Meta and a key figure in the rise of neural networks, said in an interview this week that these types of systems are not powerful enough to attain true intelligence. Google's technology is what scientists call a neural network: a mathematical system that learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
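To make that idea concrete, here is a minimal sketch of how a neural network learns from labeled examples. It is not Google's actual system; the tiny model, the random placeholder data, and the training loop are illustrative stand-ins only.

```python
import torch
import torch.nn as nn

# Illustrative stand-in: a tiny classifier that learns "cat vs. not cat"
# from labeled image tensors. Real systems are vastly larger.
model = nn.Sequential(
    nn.Flatten(),                 # 32x32 RGB image -> vector of 3,072 numbers
    nn.Linear(3 * 32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 2),             # two scores: "cat" and "not cat"
)

images = torch.randn(100, 3, 32, 32)   # placeholder for real photos
labels = torch.randint(0, 2, (100,))   # placeholder for human-provided labels

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong are the current guesses?
    loss.backward()                        # trace the error back through the network
    optimizer.step()                       # nudge the weights to reduce the error
```

Each pass over the data adjusts the network's internal weights slightly, which is the "pattern pinpointing" the article describes.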


Over the past several years, Google and other leading companies have built neural networks that learn from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. These "large language models" can be applied to many tasks: they can summarize articles, answer questions, generate tweets, and even write blog posts.
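For a sense of what such a system looks like in practice, here is a small sketch using a freely available model (GPT-2) as a stand-in; LaMDA itself is far larger and not publicly accessible, so this is an illustration, not the system described in the article.

```python
from transformers import pipeline

# GPT-2 is a small, public large language model used here purely
# as an illustrative stand-in for systems like LaMDA.
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence is"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

The model continues the prompt by predicting likely next words, which is how these systems produce summaries, answers, and posts.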

But they are flawed. Sometimes they generate perfect prose. Sometimes they generate nonsense. The systems are very good at recreating patterns they have seen in the past, but they cannot reason like a human.
