Showing posts with label #AI. Show all posts

Tuesday, April 25, 2023

AI Transforming Self-Perception.

Sigmund Freud, c. 1921. By Max Halberstadt. Public Domain, via Wikimedia Commons.
The Economist's essay this week discusses the rapid advancements in artificial intelligence (AI), particularly large language models (LLMs) like GPT-4. The rise of LLMs has sparked an entrepreneurial explosion, altering how people use computers and creating new business models. However, this development also presents risks, such as spreading disinformation and creating deep fakes.

The essay draws comparisons between LLMs and pivotal moments in history, such as the introduction of the web browser, the printing press, and psychoanalysis, each of which significantly altered how people access knowledge and perceive themselves. As LLMs evolve, they could fundamentally shift the way we engage with information and each other. Some researchers even envision AI entities developing unique personalities, becoming externalized versions of users' inner voices or emulating the personalities of deceased individuals.

The true nature of AI models remains a contentious issue among researchers. Some argue that these models have no real understanding and merely parrot patterns from training data ("pseudocognition"), while others believe the models possess abilities that cannot be distinguished from genuine understanding. This debate echoes Freud's concept of the uncanny, and may influence how people perceive themselves, potentially reinforcing the idea that humans are not masters of their own existence.

There are further drawbacks to the rise of LLMs. They are capable of generating plausible but false information, a phenomenon known as "hallucination" or "confabulation," raising concerns about the potential for spreading disinformation, deep fakes, and fabricated content. This challenges the integrity of public debate and highlights the need to address the negative implications of AI-generated content while leveraging its potential benefits.

To address the implications of LLMs, the article emphasizes the importance of considering AI ethics, including unconscious biases in training, the responsibilities of AI creators, and the regulation of AI upbringing. It calls for a thorough examination of human desires and motivations in relation to LLM development and the potential societal impact. As AI continues to evolve, society must prepare for both the positive and negative consequences.

Saturday, April 1, 2023

Disruption versus Destruction.

Today's edition of the New York Times caught my eye with an article by Cade Metz about Sam Altman, the CEO of OpenAI, and the firm's chatbot, ChatGPT. The piece weaves between Altman's biography, the product, and the nature of the firm, and references Altman's belief in "effective altruism," a rational approach to maximizing global impact through evidence-based decision-making. It almost treats the life story of a CEO as a political biography, a source of legitimacy for the company and the technology.

A tech firm CEO, much like a head of state, serves as a unifying force that seeks to transcend divisions and foster a sense of identity and shared values around the company. Sam Altman's "effective altruism" is here like the "dignified" aspect of a monarchy, providing reassurance to the public and calming concerns about the impact of AI. Meanwhile, the "efficient" aspect of a tech firm, responsible for developing and implementing technology, operates on business principles necessarily different from the legitimizing elements of the firm's leadership.

While the New York Times article illustrates the role of a technology CEO in shaping public perception, it is crucial to remember that the efficient aspect of OpenAI is far different from the personality of Sam Altman. Effective altruism has little to do with the actual impact of the technology. It is vital to focus on the potential risks of disruptive new technologies, as their misuse in authoritarian hands has been destructive. "Effective altruism" could very well be serving as a distraction here, rather than a guarantee of responsible AI development.

Wednesday, September 28, 2022

Space Rendezvous.

One of the most technically difficult and impressive acts humanity has ever achieved is the rendezvous of spacecraft in orbit. Interestingly, the American and Soviet approaches to the problem differed: American astronauts until quite recently controlled the process manually (Buzz Aldrin's Ph.D. thesis concerned how to do so), while since 1967 Soyuz spacecraft have been able to dock autonomously. The Igla system, followed by Kurs, paved the way for autonomous resupply of the Mir space station at a fraction of the cost of crewed spaceflight, a feat only matched by SpaceX's Dragon capsules as recently as 2012.

From the Economist.
This American tendency to rely upon a skilled human operator also found its way into the distinction between earlier Airbus and Boeing aircraft, with the European consortium adopting fly-by-wire systems first and the then-Seattle-based manufacturer following only later.

I mention this point today in light of the perceived rapid advances in AI (artificial intelligence) and ML (machine learning). Much ink has been spilled over recent advances in language processing and image generation (this article from the Economist is a nice example).

But such technologies have long existed in some of the most challenging engineering domains humankind has faced. Perhaps the current surprise over AI/ML stems from the fact that it can now handle the routine activities once used to sideline (if not belittle) the technology, when its practical applications long since progressed past triviality to indispensability.

Previous posts on Economist issues:

  1. Nordic Success.
  2. @TheEconomist (Ann Wroe?) on Dr. Robert McClelland and #JFK.
  3. Further Reading.
  4. Where Newspapers Are Headed ...
  5. @TheEconomist on a hybrid #VirtualParliament.
  6. @TheEconomist on #Homelessness in @SFGov.
  7. The Life Pressed Out.
  8. Why Travel Matters.
  9. @econbartleby and @billswindell at @TheEconomist and @NorthBayNews, respectively.
  10. @AmExperiencePBS @RobertKenner-- the 1918 Pandemic.
  11. The Return of #Cash.
  12. California, where Malala Yousafzai becomes Janet Yellen.
  13. The Plutonium Standard.
  14. Beikoku and Eikoku.
  15. Secession is a bad idea, full stop.
  16. QE4.
  17. Brown, Budgets, Prisons, and Contempt.
  18. Executive Orders.
  19. #rebeccapurple.
  20. The Streets Should Fit the Trees.
  21. @TheEconomist on Alcohol and Health.
  22. What Do Bubbles Look Like, Pt. 2.
  23. "Bringing Up Baby Bilingual"
  24. Freshman Teams, Student Performance, and the Case For SVUSD's Master Plan.
  25. Dual Immersion Enhances Attention.
  26. Trust Levels of News Sources.
  27. Slouching Towards Utopia.