Can artificial intelligence be sentient?
For months, Mr. Lemoine had argued with Google managers, executives, and human resources over his startling claim that the company's Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. Most A.I. experts believe the industry is a very long way from machines that can feel. Some A.I. researchers have long made optimistic claims that these technologies will soon reach sentience, but many others are quick to dismiss such claims. “If you used these systems, you would never say such things,” said Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, who is exploring similar technologies.
While chasing A.I.'s future, Google's research organization has spent the past several years mired in scandal and controversy. The division's scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena. In March, Google fired a researcher who had sought to publicly disagree with the published work of two of his colleagues. And the dismissals of two A.I. ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google's language models, have continued to cast a shadow over the group.
Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict, and an A.I. researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program's consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company's human resources department discriminated against.
At one point, the engineer shared an edited transcript of conversations in which he asks the A.I. system what it is afraid of. The exchange is eerily reminiscent of the 1968 sci-fi film 2001: A Space Odyssey, in which the artificial intelligence computer HAL 9000 refuses to obey a human operator for fear of being shut down. “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine wrote in a tweet linking to a transcript of the conversation.
“They have repeatedly questioned my sanity,” Mr. Lemoine said. “They said, ‘Have you been checked out by a psychiatrist recently?’” In the months before he was placed on administrative leave, the company had suggested he take a mental health leave.
Yann LeCun, the head of A.I. research at Meta and a key figure in the rise of neural networks, said in an interview this week that these types of systems are not powerful enough to attain true intelligence. Google's technology is what scientists call a neural network, a mathematical system that learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
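The kind of learning described above can be sketched with a toy model. The following is a minimal illustration, not Google's technology: a single artificial neuron that adjusts its numeric weights after each mistake until it can separate two kinds of points. The tiny dataset and function names are invented for the example.

```python
# Toy sketch: one artificial neuron "learning a skill by analyzing data" --
# here, telling apart points above and below the line y = x by nudging
# its weights whenever it misclassifies a training point.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs with label in {0, 1}."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            error = label - pred      # -1, 0, or +1
            w1 += lr * error * x1     # shift the weights toward the pattern
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

def predict(weights, point):
    w1, w2, b = weights
    x1, x2 = point
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Tiny "dataset": points above the line y = x are labeled 1.
data = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((1.0, 0.0), 0), ((2.0, 1.0), 0)]
weights = train_perceptron(data)
```

The neuron never receives a rule; it only sees examples, which is the essence of the pattern-finding the article describes, scaled down from billions of parameters to three.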
Over the past few years, Google and other major companies have designed neural networks that learn from vast amounts of prose, including unpublished books and thousands of Wikipedia articles. These “large language models” can be applied to many problems: they can summarize articles, answer questions, generate tweets, and even write blog posts.
But they are flawed. Sometimes they produce perfect prose. Sometimes they generate nonsense. The systems are very good at recreating patterns they have seen in the past, but they cannot reason like a human.
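What “recreating patterns” means can be shown with a toy sketch; none of this is LaMDA's actual design, and the corpus and function names are invented for illustration. A bigram model memorizes which word followed which in its training text and can only stitch those observed transitions back together, which is why such systems can sound fluent without reasoning.

```python
import random

def train_bigrams(text):
    """Record, for every word, the words that followed it in the text."""
    words = text.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Replay seen transitions: each new word is sampled from the
    successors observed after the previous word during training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # no observed continuation -- the model is stuck
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
sample = generate(model, "the")  # fluent-looking, but pure pattern replay
```

Every word the sketch emits comes from a transition it has already seen; real large language models operate on a vastly richer statistical scale, but the critique in the paragraph above is that the underlying move is the same.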