(Bloomberg) — “Black Swan” author Nassim Taleb said investors should insure against a stock-market crash, as structural issues such as the US debt burden threaten to derail an otherwise unstoppable rally.
Even with US stocks making multiple record highs and corporate profits surging, Taleb, a distinguished scientific advisor to hedge fund Universa Investments, warns that the real danger now comes from visible risks, so-called “white swans,” that most people ignore until it’s too late.
Black Swan author Nassim Taleb says the steep plunge in Nvidia exposes the equity market’s fragility, while warning of more losses ahead. Speaking to Bloomberg’s Sonali Basak on the sidelines of Miami Hedge Fund Week, Taleb says a drawdown two or three times bigger than Monday’s 17% selloff is “absolutely in line” with what the market should expect.
Nassim Taleb, Black Swan author and Universa Investments distinguished scientific advisor, talks about the fragility of markets, how to hedge against geopolitical risks, and artificial intelligence. He’s on “Bloomberg Markets.”
A workshop organized by the Real World Risk Institute: an intense 10-day online program whose 18th edition took place July 10–21, 2023.
This video discusses the capabilities and limitations of large language models like GPT, the challenges of setting constraints on AI systems, and the potential risks and consequences of AI decision-making. Topics include:
The concept of a “stochastic parrot” in language processing and machine learning.
How language processing systems like GPT use data from the web to generate responses.
Attempts to “trick” GPT with questions requiring nuanced understanding.
The simple operation of GPT in predicting the next word in a sequence.
The use of language models as a new interface to computers.
The integration of GPT with Wolfram Alpha for computations and informed responses.
The similarity between writing good prompts for GPT and expository writing.
The training data for GPT, which includes nonsense, fiction, and factual information.
The problem of the “self-licking lollipop” in information sources.
The concept of “necessarily human work,” which requires human choice and input.
The potential for AI to make decisions and the challenges of setting constraints.
A thought experiment called “promptocracy” for AI decision-making.
The actuation layer of AI and the difficulty of setting constraints.
The phenomenon of computational irreducibility and trade-offs in AI computation.
The potential risks of AI decision-making and the need for understanding large language models.
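The “predicting the next word in a sequence” point above can be illustrated with a minimal sketch. This is not how GPT actually works internally (GPT uses a transformer trained on web-scale data); it is a toy bigram-count model, assumed here purely to show the interface: given a context, produce the most likely next word.

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for web-scale training data (purely illustrative).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word. GPT replaces these
# counts with a learned neural model, but the interface is the same:
# context in, distribution over the next token out.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return follows[word].most_common(1)[0][0]

def generate(start, n):
    """Greedily extend a sequence one predicted word at a time."""
    out = [start]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(predict_next("the"))   # "cat" follows "the" most often in this corpus
print(generate("the", 4))
```

Real systems sample from the predicted distribution rather than always taking the single most likely word, which is why the same prompt can yield different completions.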