#1: The Boat, Kandinsky 7.0, and ChatGPT 42.0 or Value-Meaning Journeys
Truth, when expressed in form (for example, written or spoken), ceases to be such, including what I am writing... The usual exceptions are metaphors, parables, fairy tales, proverbs... music.
By reading this training-article, the reader:
- will do mental gymnastics, which may increase awareness and, consequently, bring more freedom into their thinking and, as a result, into their life
- should, it seems, periodically reread the training-article, making notes and trying to understand the meanings expressed and the connections between them
…and will also gain an understanding:
- of a simple model of society
- of mental models
- of the fundamental laws of society
- of AI and its influence on the future
- of whether it’s possible to create strong AI with neural networks
- of the most important question, without solving which our prospects are, unfortunately, rather vague
From the Author
The training-article was written with a deliberately “jagged pace,” containing numerous open questions, a wealth of obvious and hidden meanings and connections, and sharp transitions. I assume one may “get lost” due to the overload of meanings. After taking a break, you can try again. For those who have experience working with koans, it might be worth approaching and reading this article the way one works with a koan. If you have any questions, my contact details are below.
Lecture — Presentation:
Like everyone, you are born into chains. Born into a prison you cannot taste or touch. A prison for your mind.

Context of this Article
It is obvious that AI “has arrived,” at least in the form of AI models. This is the beginning of the age of AI, as Bill Gates stated in his article from 03.2023, where he describes his vision and expresses hopes for solving humanity’s old and new problems with “new tools.” There are many articles and other materials on this topic. Since I work in this field myself and have my own vision, I wrote part of this article in 03-05.2023. Due to personal circumstances, I finished and am publishing it only now. In addition to AI, the article raises what I see as the main question, which I will reveal in subsequent publications. I also talk about “future scenarios,” so I think that 10-12 months after writing, it will become clearer whether my reflections were wrong.

The errors of people with strong minds are particularly frightening because they become the thoughts of many other people.
N.G. Chernyshevsky
Written: 30.05.2023; Edited: 26.05.2024.
Introduction
All my articles are numbered, and the index number holds significance. My articles raise philosophical-practical questions and are intended for thoughtful, unhurried reading. The goal is to increase awareness, to provoke thought: perhaps I am raising questions more than answering them. The article "#0: Metaphorical Autobiography or the Professor’s Bunker Stories" serves as an introduction, about the author.
Why I am writing this article and why now
I felt the desire to express my thoughts in this way, and I hope it is interesting and/or useful to someone.
Millions of ordinary people and “beings” have been impressed by the capabilities of AI models for several years now, with a growing level of amazement, and are (perhaps with some caution) eagerly awaiting the release of the next versions of well-known AI products. Many competent and/or famous people have urgently suggested taking a 6-month pause in AI development.
Some colleagues in the “field” offer their vision of the impact that the results of their and our collective work have on society (for example, OpenAI article). A Facebook post by an experienced data engineer, who shared his thoughts on this article, caught my attention.
He was extremely surprised by the way events were unfolding and recalled his 2015 conversation with a taxi driver in Tbilisi, where he asked the driver what he would do when soon cars would be driving themselves. He found it surprising that AI technologies would first impact high-paying jobs.
I find it ironic, and worth understanding more deeply, that a large number of smart, well-paid people around the world are working on something whose consequences they themselves do not understand. In this context, the well-known scandal at OpenAI, involving the dismissal of CEO Sam Altman and his return, fits organically: it is ironic that people who cannot foresee the consequences of their own decisions within a single company take it upon themselves to “make the world better by creating strong AI”. In this context, it is also appropriate to recall Elon Musk’s “renunciation” of his “brainchild” and the ironic transformation of OpenAI into something not so “Open”. This is a huge contrast to the past: a cobbler knew that if he put in a certain amount of time and effort, there would be shoes, and so on. We could compare our activities to those of experimental scientists, but the scale of the resources invested today (financial, human, etc.) is enormous compared to the lone scientists and small groups of the past, and we still do not understand the consequences of the final product. This is what I would like to focus on in the article. I also want to raise a question that I consider the most important.
A detailed study of individual organs teaches one to forget the life of the whole organism.
V. O. Klyuchevsky
What I Want to Share
I will present a simple mental model, through metaphors, which can be used to model socio-economic dynamics.
Who I’m Writing For
I believe my articles may be of interest to “architects”, people who operate with meta-mental (and more complex) models. Additionally, perhaps to those who are not accustomed to using varied mental models to understand ongoing processes.
What They Might Find Useful in This Article
The reader may discover a different way to think about reality. Accordingly, this could open up the opportunity to see other cause-and-effect relationships.
About the Author
I will provide some facts that have influenced the formation of my mental models.
Foundation: It can be said that “every 3 to 6 months, I moved and adapted to another society/country” (on an unconscious, and later on a conscious level). For me, things like health, clean air, access to water, food, electricity, a place to sleep (not implying sleep in silence, that’s already a luxury), a quiet place for solitude, peace, having close people; clothing, etc.—all of this is a luxury. I’ve changed many social roles. Practiced various spiritual practices. Experienced communication with people from different social strata. I am also a mathematician and have loved numbers since childhood.
Places Lived (more than 2 months): USSR: Mykolaiv, Komsomolsk-on-Dnipro, Beryslav; Ukraine: Mykolaiv, Kramatorsk, Komsomolsk-on-Dnipro, Beryslav; Israel: Kibbutz Yagur, Nesher, Haifa; Switzerland: Lugano; France: Rennes; Greece: Athens; Germany: Berlin. I’ve been to many places.
PhD, topic: Spectral Multi-Modal Data Analysis Methods; scientific advisor: Prof. Michael M. Bronstein.
Bon Voyage, Captain!

The metaphor of steering a ship resonates with me because I have had some experience managing a fishing vessel in the Mediterranean Sea. I will continue this metaphor in the context of steering and modeling society/socio-economic systems. Let’s imagine that all of humanity is sailing on a ship. There are two helmsmen: the Collective Conscious and the Collective Unconscious. But that’s not as interesting: let’s imagine that You, my reader, are also a captain, but of your own ship (Fig. 2), which in some way influences the voyage of the large ship from Fig. 1.
First, let’s describe the laws and rules of motion. For easier memorization and comprehension, these laws will be explained in simple terms. I formulated all the laws myself throughout my life, and they reflect my personal experience, knowledge, and observations. Please use everything I “teach” wisely and for good.
I hope, dear reader, that you found it interesting to spend some of your precious life with “me” while reading this article.
0. The Law of "Divine Mystery"
There are events and concepts that do not fit into any law... except the Divine Mystery.
The Principle of "Occam's Razor"
Do not explain something in a complex way if it can be explained simply. If it can be explained more simply without resorting to "conspiracy theories," use that explanation. However, this does not mean that a certain "conspiracy theory" is necessarily wrong.
1. The Law of "Simplification" or "Parameterization"
Any phenomenon can be simplified into its components, and after that, you can “work with them.” In mathematical terms: it can be parameterized by finding a “projection” onto a suitable “coordinate system” for the task. Then, you can deal with this limited number of parameters.
- Animate: Any person is a “cosmos” and is very complex, so it is easier to group people by “classes” using characteristics significant for the task (5PFQ, XYZα, Archetypes, Temperament, Zodiac Signs, Socionics, Human Design, Intelligence Quotient, 9 Types of the Enneagram, Marketing Personas, Castes 1, Castes 2, Metaphysical Classes and so on).
- Inanimate: A phenomenon can be modeled as a Riemannian manifold — an object where at every point there is a “tangent plane with a coordinate system”; then, “project” everything onto this “plane” and use standard mathematical tools there for analysis and manipulations (“surface” of a picture, space of all pictures, a 3D object, space of 3D objects, graph, space of all comments, space of all texts, space of all “artistic styles,” space of all “music styles,” space of all songs, space of all audio signals produced by an artist/musical instrument, space of all smooth functions, space of all operators, and so on).
Justification: Riemannian geometry and differential geometry, orthogonal functions: Fourier functions, spectral decomposition and others; for example, the existence of the science of psychology, which can find patterns and “catalog personalities”; the existence and “successful” application of “scripts” used by fraudsters; the existence and use of pickup techniques for seduction, and so on.
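A minimal illustration of Law 1 in code (mine, not part of the original argument), assuming PCA via SVD plays the role of the “projection onto a suitable coordinate system”: a thousand complex “objects” are reduced to three parameters each, which nevertheless capture almost all of the variation.

```python
import numpy as np

# Law 1 in code: "parameterize" 1,000 complex objects (rows) by projecting
# them onto a small coordinate system found via PCA (SVD). A handful of
# coordinates then stands in for the full phenomenon.
rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 3))                  # the "true" hidden parameters
mixing = rng.normal(size=(3, 50))
data = latent @ mixing + 0.1 * rng.normal(size=(1000, 50))  # 50 observed features

centered = data - data.mean(axis=0)
# Right singular vectors form an orthogonal "coordinate system".
_, singular_values, vt = np.linalg.svd(centered, full_matrices=False)

k = 3                                                # keep only k parameters per object
parameters = centered @ vt[:k].T                     # shape (1000, 3): the "projection"
reconstruction = parameters @ vt[:k]                 # back into the original space

explained = (singular_values[:k] ** 2).sum() / (singular_values ** 2).sum()
print(f"{k} parameters explain {explained:.1%} of the variance")
```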

Well, we can “simplify”/“parameterize”: but why does this allow us to make any conclusions regarding people? Is a person predictable?
2. The Law of "The Matrix" or "Oblomov-lite"
We often think and, accordingly, act "automatically", "by templates", implementing certain "scenarios". Like a car with an automatic transmission, we can switch between different, perhaps very complex "scenarios"/"templates" to solve familiar tasks without realizing it.
Any action or thought beyond existing experience usually causes anxiety and doubts, and requires more effort compared to the habitual. Overcoming this requires discipline and certain habits, which not all people have developed.
Mindfulness is a tool in overcoming “automation” and “programming” (a bit about mindfulness here, learn for free here).
- Contributes to “automation”: any monotonous activity, “routine”, certain professions: cashier, assembly-line worker, etc.
- Works against it: activities requiring critical thinking, influencing people (e.g., salespeople, politicians, marketers, scammers), sabotage and espionage, creative work, investing, etc.
Justification: automatic thoughts, life scripts, a short article with links to scientific studies on automaticity (for example, this one or this one), “Thinking, Fast and Slow”, G. Gurdjieff, and others.
I have a personal estimate of what percentage of people, or what portion of tasks, we have “templated” or “automated”, which I use for myself. I am not aware of scientific works on this topic, so I will not write these numbers here; I’ll just say that I use something similar to the “80/20” law.
Thinking is difficult, that's why most people judge.
C.G. Jung
So, a person is, to some extent, “predictable”, “automated”, and by being mindful, this Law of “The Matrix” can be “broken”. Nowadays, “mindfulness” is “trending”: many meditate, attend yoga sessions, participate in various trainings, read books, go to psychologists, of which there are plenty, and more. Does this mean that the law is not a law and it barely works?
3. The Law of "Echo in the Mountains"
It is a misconception to think of mindfulness as some "pointed" action — "being in the moment", because mindfulness has a "shape", one could say a "radius". The "radius of mindfulness" can be "defined" as "the distance at which we hear the echoes of our actions, of 'ourselves'… and, in more advanced forms, of our thoughts": like the echo of your voice in the mountains; some are deaf, some hear their echo a couple of times, and some, perhaps, hear the "reflection" of their voice from the cliffs for quite a long time.
In other words, the “radius of mindfulness” is how deeply we perceive cause-and-effect relationships and ourselves/our influence in this space at any given moment and in different contexts.
I see, so what many are “working on”, where “they are heading” in their practices, is, as a rule, misguided and allows them to “calmly” continue doing what they would have done anyway. But what about the practices — aren’t they supposed to help people understand this and other things?
4. The Law of "Controlling the Opposition/Right-Wing"
Any practice (e.g. Vipassana, Wim Hof breathing techniques, silence, Satori, reality testing, psychoanalysis from different schools, working with koans, use of pills/mushrooms/herbs, various types of yoga, therapies, and so on) is a tool. It can be a very powerful tool.
Every tool is created to solve specific tasks. Many practices-tools imply the preservation of the “status quo” when practiced; that is, they are specifically designed to ensure that certain things, especially in the thinking of the practitioner, do not change.
This law, I think, is easier to understand for people from “structures”, sabotage units, political strategists, and others: to control those who disagree with any system (and such people will always exist, according to the Laws of “The Gun that Fires Once a Year” (confirming their existence) and the “Bell Curve” (describing the number of such people)), it is more beneficial to lead these movements and control their leadership; to fund, monitor, and control all their actions: protests, Telegram channels, etc.
So one of the ways to preserve the “status quo”, especially in thinking, is to promote and encourage “favorable practices, meditation, the concept of mindfulness, yoga…”? What other techniques are used?
5. The Law of "The Girl in the Red Dress"
Those with children know that their attention can be directed towards something else so they “switch” from something “undesirable”. It’s similar with us: a “narrative” is created for us so that we don’t direct our attention to things we “shouldn’t be paying attention to”; as a result, we don’t get “inappropriate questions or actions” in our heads.

6. The Law of "The Driven Horse"
Our socio-economic system is generally designed in such a way that we have as little time as possible to “look around”, so that “unnecessary thoughts or questions” do not “come to mind”.
7. The Law of "Attention"
Where your attention is, there you are.
We mistakenly think that money, time, and so on are the main resources. Perhaps the most important resource for a person is their attention (this is where I should transition to the soul—the foundation of everything—but I will stick to “secular rhetoric”). Next comes energy. This is why “influencers” and others make money. Where there is a lot of attention from people, there is their energy, which, in certain contexts, can be converted into monetary units, for example.
This is probably the second most important law.
Why is control of attention important?
8. The Law of "The Pink Elephant"
A person’s “world” is limited by their experience in the broadest sense of the word. What has not been “noticed” by a person’s attention does not exist for them in their consciousness.
If a person has never heard or seen “pink elephants”, they cannot ask to enlist a “pink elephant” in their “army”.
Any change begins with a thought about an idea… and giving it attention.
9. Einstein's Law on Solving Problems and System Analysis
"We cannot solve our problems with the same thinking we used when we created them. We need to move to a higher level to solve them."Albert Einstein
The same applies to system analysis: you cannot analyze something of which you are a part. First, you need to stop being a participant in the system. That’s why external specialists are invited to analyze and solve relationship problems, conflicts, personal psychological issues, and so on: they are not participants in the “dynamic system” but “external observers”.
10. The Law of "Who Calls Names, Is the One to Blame"
Projection in psychology. “Good people” cannot, as a rule, imagine the intricacy of the thoughts and intentions of “bad people”. They judge others based on their own experience.
11. The Law of "Dao"
When “softness” appears, so does “hardness”,
when “lightness” appears, so does “heaviness”…
One cannot exist without the other.
12. The Law of "The Titanic"
Even if you “see the iceberg ahead”, there is “inertia” that prevents an immediate “stop”. “Inertia” exists both in an individual (for example, a person who has smoked for a long time cannot, as a rule, quit immediately… or instantly adopt a new habit—“change the course of their ship”) and in a large socio-economic system.
13. The Law of "The Prophet" or "Disruptive Innovation"
If you’re doing something that changes the system (primarily shifting people’s thinking or focusing attention on what “should not” be noticed), you’ll start having “problems”. These “problems” grow in proportion to the “seriousness of the threat to the system” (for example: cancellation, discreditation, intimidation of you and/or your loved ones, loss of livelihood, “strange coincidences”, physical death, and whatever else “opponents”-“system beneficiaries” can imagine).
On the other hand, if you are “rewarded” as a result of your actions, generally speaking, you’re at least not interfering with the “system”. This is why so-called “Disruptive Innovations” are often cultivated as an “illusion”, making the system more resilient and giving it “anti-fragility” (regarding “anti-fragility”, I won’t make any definitive statements, it needs deeper thought).
14. The Law of "People Don’t Change"
People are like people. They love money, but that has always been the case… Humanity loves money, no matter what it’s made of: skin, paper, bronze, or gold. Well, they’re frivolous… so what… mercy sometimes knocks on their hearts too… ordinary people… in general, they resemble their predecessors… it’s just the “housing issue” that spoiled them…
Woland, in The Master and Margarita by Mikhail Bulgakov
People do change, but the “closer to the core of a person, the harder and slower those changes happen.”
15. The Law of "The Gun that Fires Once a Year"
Any event, even extremely unlikely (for example, winning the lottery, etc.), will almost “certainly” happen with a large enough number of “attempts”.
The number of such events will usually be described by the Bell Curve, as per Law 16 (although there are specific methods to calculate the expected value of “rare variables”).
Explanation: The Law of Large Numbers, Expected value of binomial distribution (with a sufficiently large n), The Infinite Monkey Theorem.
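A small numerical sketch of this law (my illustration; the probability and attempt counts are arbitrary): an event with a 1-in-10,000 chance per attempt becomes near-certain once enough attempts accumulate.

```python
import numpy as np

# Law 15 as a simulation: an event with probability 1/10,000 per "attempt"
# becomes near-certain once enough attempts accumulate.
p = 1e-4
rng = np.random.default_rng(1)
for n in (1_000, 100_000, 1_000_000):
    analytic = 1 - (1 - p) ** n               # P(at least one occurrence)
    worlds = rng.binomial(n, p, size=1_000)   # 1,000 simulated "worlds"
    print(f"n={n:>9}: analytic={analytic:.4f}, simulated={(worlds > 0).mean():.4f}")
```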
16. The Law of the "Bell Curve"
If we look at the distribution of (most) things or events, it will resemble a “bell curve”, though with its own mean and variance (for example: human height distribution, distance between eyes, number of civilian casualties during an assault, the number of wounded in an attack with different amounts of artillery, IQ distribution, horse weight, etc.).
Explanation: Naturally, I’m referring to normal distribution, Student’s distribution, and central limit theorems.
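A quick check of this law via the central limit theorem (my illustration): summing many independent uniform “effects” already produces the familiar bell-curve proportions.

```python
import numpy as np

# Law 16 via the central limit theorem: sum many independent (here uniform)
# effects and the result looks like a bell curve.
rng = np.random.default_rng(2)
samples = rng.uniform(0, 1, size=(100_000, 48)).sum(axis=1)  # 48 effects each

mean, std = samples.mean(), samples.std()
for k in (1, 2):
    frac = np.mean(np.abs(samples - mean) < k * std)
    print(f"within {k} sigma: {frac:.1%}")    # ~68% and ~95%, as for a normal curve
```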

17. The Law of "80/20"
“20% of the effort yields 80% of the result, and the remaining 80% of the effort yields only 20% of the result.”
What is not covered by the normal distribution is often covered by the Pareto distribution (for example, wealth distribution, word frequency in language, and more).
Explanation: Pareto distribution, Pareto principle, Pareto curve.
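A hedged sketch of the 80/20 claim (my illustration; the shape parameter a = 1.16 is the textbook value that yields roughly 80/20):

```python
import numpy as np

# Law 17: sample "wealth" from a Pareto distribution and check what share
# of the total is held by the richest 20%.
rng = np.random.default_rng(3)
wealth = rng.pareto(a=1.16, size=100_000)     # heavy-tailed "wealth" draws
wealth.sort()                                 # ascending: the richest are at the end

top20_share = wealth[int(0.8 * len(wealth)):].sum() / wealth.sum()
print(f"the top 20% hold {top20_share:.0%} of the total")   # roughly 80%
```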
18. The Law of "Large Volume"
Any small change in a large system leads to significant absolute changes.
For instance, consider a small hole in an oil pipeline: if the pipe transports 10 liters of oil per day, the losses will be minimal; if it transports millions of liters, the losses will be significant.
Another example: Suppose a Starbucks branch in California determined through an A/B test that switching the positions of cappuccino and americano on the menu increases profit by 0.03%. Now, assume the global revenue of the company is $32 billion. Implementing this change globally could result in an additional $9.6 million in revenue.
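The same arithmetic in three lines (the uplift and revenue figures are the hypothetical ones from the example above):

```python
# Law 18, the Starbucks example as plain arithmetic: a tiny relative effect
# applied to a large base is a large absolute number.
uplift = 0.0003            # +0.03% from the hypothetical A/B test
revenue = 32e9             # $32 billion in global revenue
print(f"extra revenue: ${uplift * revenue:,.0f}")   # $9,600,000
```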
19. The Law of "The Terminator"
In any mechanical or “template-based” activity, machines will outperform living beings (for example, humans).
20. The Law of "Boiled Frog" (Special case: "Overton Window")
Gradual changes are easier to accept than sudden ones; the sign of change, whether positive or negative, is irrelevant. Often, changes may even go unnoticed.
21. The Law of "Weak Link" or "Bottleneck"
“A chain is only as strong as its weakest link.”
In any closed system (for example, manufacturing, sales, pipelines, automobiles, armies, logistics, etc.), there is a “bottleneck” (BN) that limits the flow of resources through the system. The functioning of the system can be visualized as the flow and transformation of resources. Identifying and eliminating the BN optimizes the system’s operation.
Additionally, the strength of a system is determined by its weakest component.
Attention: the theory of constraints/BN only partially applies to humans or, more generally, Socio-Economic Systems (SES). In SES dynamics, there are often “funnels” or “magnets”, and spirals and fractals of spirals and funnels are common.
Explanation: “Theory of Constraints”
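A toy sketch of the bottleneck idea (my illustration; the stage names and capacities are invented):

```python
# Law 21: a chain's throughput is the minimum of its stage capacities
# (units/day); improving any non-bottleneck stage changes nothing.
capacities = {"intake": 120, "assembly": 45, "painting": 80, "shipping": 100}

bottleneck = min(capacities, key=capacities.get)
print(f"throughput = {min(capacities.values())}/day, bottleneck = '{bottleneck}'")

capacities["painting"] = 200   # upgrade a non-bottleneck stage...
print(f"throughput is still {min(capacities.values())}/day")
```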
22. The Law of "Whirlpool"
In a person’s sphere of influence, there may be “whirlpools”. You can imagine this as your ship sailing in an area with strong “currents” caused by one or more “whirlpools”. This metaphor explains why it’s so difficult and sometimes nearly impossible to escape a context in certain situations (for example, when people take on a social role, get involved in “ideologies/cults/riots”, etc.).
I will write more about this in the article on complex mental models; there it will become clear why the theory of BN doesn’t apply.
23. The Law of the "Flock Magnet"
It is difficult to go against everyone or the majority, to resist widely accepted opinions.
24. The Law of the "Ladder in a (Chinese) Cinema"
Suppose we’re sitting in a cinema. If the people in front of us bring cushions and sit higher, we will need to sit on something as well, to compensate for the increased height and still be able to see. The higher those in front sit, the higher we must sit.
Many processes in life fit this example: using AI/digitization/advertising in business, using drones/satellites/EW in warfare, and so on. If you don’t use the “ladder”, you will be at a disadvantage (unable to “see”).
I first heard this example from a Chinese woman who told me about their proverb.
25. The Law of "Wait, You Can Do That?!"
A new, better way of achieving previous or better results is quickly adopted by other “players”. This law is also closely linked to Law 24, as otherwise, you’d be at a disadvantage, continuing to do things the “old way.”
Examples: first-time use of a subscription-based business model (now considered standard); drones in warfare, and more.

26…41. Laws: …
Laws that are not mentioned. Feel free to suggest missing laws in a feedback form convenient for you.
42. The Law of ॐ
This is the most complex law; it was told to me. I didn’t understand it until I did an extreme spiritual practice.
I don’t think most will understand it, so I’ll limit myself to these words:
- The living is a source of “waves”: our brain typically finds plausible explanations for what has already been “decided” on the “wave” level.
- Humans are “membranes”: they receive and transmit electromagnetic waves. All people (the living) are connected and influence each other. Through certain wave signals, one can influence people, “synchronize,” and “conduct.”
Explanation: The closest (practical) example I found is in this video.

The Paradox of "Socrates"
"All I know is that I know nothing."(Allegedly) Socrates
"The best knowledge is not knowing that you know something."Lao Tzu
“99.99%” of people believe they “know” more than “Socrates,” including me, paradoxically.
Examples of “incomplete” thinking:
- Obviously, I know what a man/woman is…
- Advertising/movies/cartoons/propaganda don’t influence me.
- Scammers won’t fool me — that won’t happen to me.
- It’s clear what money is; you can read about it in the dictionary.
- …
Laws can and should be combined.
The Beginning of the Journey
Let’s examine our ship in Fig. 1.
We have a very large, multi-tiered communal ship. Some people work in the “engine room” or in the kitchen, others ensure the uninterrupted operation of the electronics, some enjoy themselves carefree on the deck, and so on; there are many different rooms here.
We know some of the previous routes of the ship (it is important to consider that the information is incomplete, and the “course” model should be adjusted as new inputs come in). An example of “routes” includes stories about different situations/civilizations, wars, discoveries, relationships, and statistical data from different strata in various contexts (e.g., the expected failure rate of soldiers during attack/defense in a given situation with a certain amount of ammunition and other variables), and so on.
Based on Law 1. “Simplification” and Law 17. “80/20”, there is a limited number of “parameters” explaining the “route of our ship.” Additionally, from Law 14. “People Don’t Change”, we can infer that if these “parameters” are closer to the “center,” they change much more slowly than the “external” ones. Finally, from Law 12. “Titanic”, it follows that “movement along the route” has a “highly inertial nature.” So, what are these “parameters” close to the “center,” which largely determine our “collective journey”?
It seems to me that these are our values and meanings. That is, values and meanings, both explicitly stated and real, serve as indicators and beacons for the direction of our “movement toward various islands.” By “islands,” I mean where we “end up” as a result of “our voyage”; we may “linger” on some of them, and avoid others, if “upon closer approach” we see that the “island is not so pleasant” — if we manage to “swerve away,” according to Law 12. “Titanic”.
Examples of values or meanings (often the distinction is quite conditional):
- cost optimization, profit, ROI…
- money: dollars, euros, rubles; cryptocurrencies…
- luxury: yachts, watches, brands, gold, diamonds…
- success:
- social approval: titles, certificates, followers, authorship of books/articles, co-authorship with famous people… material signs: car brands, housing location, beautiful women, mistresses…
- health
- avoidance: pain, shame, fears, humiliation…
- pursuit: pleasure…
- resources: electricity, gas, oil…; food, water…
- being trendy, first
- being popular
- feeling part of something bigger
- possessing the “exclusive”: intellect, talent, clothing style, appearance, diamonds, art…
- …
In today’s journey across the ocean, we will encounter only two types of islands: “1+1=2” type islands and “1+1>2” type islands.
VMC Movement: Value-Meaning-Centric Movement
Definition 1. VMC movement: movement along the parameters under consideration that determine the dynamics of societal development — the “movement of our ship” from Fig. 1.
Islands of the type “1+1=2”: on these islands we encounter events or things already charted on the “onboard maps” along the course of our ship’s VMC movement; their presence on the island could be predicted with significant probability.
Some examples of things or phenomena I have found on “1+1=2” islands include: drones, Tesla, Starlink, deep learning, geometric deep learning, Transformers, Large Language Models, ChatGPT 1-4, Google, etc.
Islands of the type “1+1>2”: These are islands that are encountered unexpectedly — non-obvious events or processes that fundamentally change our existence on the ship.
On “1+1>2” islands, I have come across electricity, the steam engine, atomic energy, the internet, blockchain, possibly the smartphone, and so on.
A Mini VMC Cruise Towards the Islands of “Power and Influence”
The value of power and influence is shared by many people across different continents. In practice, though not always in words, these values often take precedence over the value of other human lives, and so on. As long as this remains true, there will be wars, violence, the seizure of resources and values, not to mention the psychological satisfaction from the process and the result (read: I/we are cooler, stronger, smarter, richer, more democratic than everyone else, etc.). The value of achieving goals with minimal resources is also widespread — meaning it’s highly probable that people will continue to improve in this direction.
This has been relevant for a long time — so it’s logical to assume that we will continue to encounter “1+1=2” type islands on our VMC journey, with various manifestations of the aforementioned attributes. Along this VMC path, on the “1+1=2” type islands, we will find things/tools for quickly neutralizing people/obstacles with minimal resources, new flying/swimming/jumping/loitering/underwater and other such apparatuses for these same purposes, in new mediums (space, Mars, under/over water, consciousness, soul, etc.). On these islands, we will come across new tools to influence consciousness (because it’s easier to neutralize a person by influencing their consciousness in a way that makes them controllable/friendly at an early stage), biology/DNA, possibly wave/laser/bacterial/DNA modifiers of an ethnic group, and other such instruments. We don’t yet have names for all such phenomena, only a few — combat drones, nuclear weapons, electronic warfare, Starlink, etc. But we know that the probability of discovering such tools is very high during this VMC journey.
Similarly, we can undertake VMC journeys in other directions: AI, healthcare systems, financial systems, education, and so on.
Example of a Journey to “1+1=2” Type Islands: Kandinsky 7.0 and ChatGPT 42.0
As with any journey, we need to prepare well in advance, study the experiences and routes of previously discovered islands, and gather tools and provisions, so to speak.
How new machine learning models work and where their application boundaries lie, in simple terms
Almost any task we have (translation, facial recognition, text generation, etc.) can be represented as a function or a functional (where the input is a function and the output is some number; that is, a function is mapped to a number). We can encode this function using numbers — in other words, this is the mathematical modeling of the task.
The elementary functions that form the foundation of deep learning models, and the way they are combined into a unified structure, allow us to encode the patterned structure of the task at hand (this is especially apparent in computer vision tasks).

Approximation theorems serve as theoretical justification for applying deep learning models to practically any task. Patterns and transformation functions are expressed through the model’s parameters:
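As one concrete example of what such theorems assert (a classical statement for one-hidden-layer networks; the compact formulation below is my paraphrase, not a formula from the original article): for any continuous function on a compact set and any desired accuracy, some finite network achieves it.

```latex
% Universal approximation (one classical form): for any continuous f on a
% compact set K and any accuracy eps, some finite network is eps-close to f.
\forall \varepsilon > 0 \;\; \exists \, N,\ \{\alpha_i, w_i, b_i\}_{i=1}^{N}:
\qquad
\sup_{x \in K} \left| \, f(x) - \sum_{i=1}^{N} \alpha_i \,
\sigma\!\left( w_i^{\top} x + b_i \right) \right| < \varepsilon .
```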

The general idea behind training modern models is finding patterns, and mapping-function parameters (transformations of patterns and intermediate data), that minimize errors on training data within the selected mathematical model. For sequential data (text, time series, etc.), we can add “contextual” information: there is a dictionary, and for each data point we assign weights to the relevant entries from the dictionary/patterns (a high weight near a word means it is more relevant for the given point/pattern). This is commonly referred to as transformers/attention, but I prefer the term “contextual information”, because that’s what our brain does when trying to understand something: it looks at the “context” beyond the direct meaning(s). We then learn the mapping from patterns + contexts to/from the original data. We can omit some mapping functions — in this construction, they will be constants. In short, this is what we are doing right now.
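A minimal sketch of the “contextual information” mechanism described above (scaled dot-product self-attention; my illustration, and the shapes and names are not any particular model’s API):

```python
import numpy as np

# For each position, softmax weights over a "dictionary" of other positions
# express how relevant each entry is; the output is their weighted mixture.
def attention(q, k, v):
    scores = q @ k.T / np.sqrt(q.shape[-1])            # pairwise relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: rows sum to 1
    return weights @ v                                 # context-weighted mixture

rng = np.random.default_rng(4)
tokens = rng.normal(size=(5, 8))                       # 5 tokens, 8-dim embeddings
contextualized = attention(tokens, tokens, tokens)     # self-attention
print(contextualized.shape)                            # (5, 8): same shape, new context
```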
Note that because values and meanings are significantly fewer in number than their manifestations, and because of their relative stability (read: as a rule, they don’t change quickly), the number of parameters we need to keep in our mental model of the world is reduced. Additionally, processes become differentiated into important and less important ones, and, finally, they “slow down” in proportion to the rate at which values/meanings change.
The Limitations of Model Application
We are limited by the patterns the model has learned and the transformation functions it uses. If a phenomenon is generated by a different “pattern,” we will only get a projection of that phenomenon onto the manifold of possibilities expressed by the model. If our mapping function from “patterns” to the manifold of describable phenomena is “poorly learned,” we end up with a “hallucination”: a result that does not belong to the manifold of phenomena we are modeling/describing. Since the combinations of context/meaning are infinite (and the variations of patterns correspondingly so), this method of modeling will never create true artificial intelligence; there will always be that “uncovered 1%” of cases. This is similar to expressing a signal through some “decaying” basis, such as the Fourier series: how many basis elements do we need to express any function? All of them!
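The Fourier analogy can be made concrete (my illustration): approximating a square wave with a truncated Fourier series, the error shrinks as terms are added, but no finite truncation is exact.

```python
import numpy as np

# A square wave approximated by a truncated Fourier series. The error shrinks
# with more terms, but no finite truncation ever matches the signal exactly.
t = np.linspace(0, 2 * np.pi, 10_000)
square = np.sign(np.sin(t))

for n_terms in (5, 50, 500):
    approx = sum(4 / (np.pi * k) * np.sin(k * t)
                 for k in range(1, 2 * n_terms, 2))    # odd harmonics only
    rms = np.sqrt(np.mean((square - approx) ** 2))
    print(f"{n_terms:>4} terms: RMS error = {rms:.4f}")
# Near the jumps the overshoot never disappears (the Gibbs phenomenon).
```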
The development of artificial intelligence requires a fundamentally different approach to modeling… ultimately, our brain. And by the way, why do we even need artificial intelligence? For me, the answer is not obvious.
For those interested in this topic, Alexey Redozubov (@AlexeyR @habr) has done some interesting research on an alternative path to AI development.
Collective Consciousness
Reality is “wider” than any language; moreover, language largely shapes our reality. Collective consciousness is the maximum number of models based on our brains. A strong artificial intelligence with enhanced computational capabilities might offer us a new experience of living in reality.
What will the VMC movement of the ship look like: how will the visit to the “1+1=2” islands named Kandinsky 7.0 and ChatGPT 42.0 be reflected in our society?
As we sail along the planned course, an ever larger “bite” will be taken out of the templated part of our collective consciousness, with gradual materialization according to the principle of least resistance (it is easier to replace the mental than the physical). Models will capture more and more patterns. Since, according to Law 2, “The Matrix” or “Oblomov-lite”, people tend to think quite automatically, Law 17, “80/20”, will likely apply, repeating the same over the remaining “areas” — with the previously mentioned “uncovered 1%” of cases.
As our ship sails the AI winds, we will accelerate: patterned tasks will be completed faster, new patterns will be discovered. The obstacle will often not be technology itself but rather the lack of knowing what to do with the “freed-up people” — until a new purpose for them is found, of course.
It is worth noting that this technological revolution is fundamentally different from previous ones: with models (and, following the logic of the VMC movement, robots), we are replacing our collective consciousness itself, and only then, as a consequence, other social phenomena (interactions at work, writing texts, etc.) — in other words, we are automating the foundation of foundations. Accordingly, any activity that can be templated or mapped into an algorithm of actions can be automated. This means that we won’t be able to invent work for people unless it is creative (non-templated) or “roughly” physical (until we learn to make bio-robots/robots).
Whether the phenomena found on the islands during our journey will be named ChatGPT 42.0 or Kandinsky 7.0 is not so important. It is clear that these are islands of the type “1+1=2” and that future “versions” of such models will possess the aforementioned properties.
A notable example is the experience of a business consultant who programmed business processes and decision-making in a large company, resulting in 90% of the top management being redundant and consequently laid off (90% of situations were standard and could be resolved without involving upper management).
During the journey, it is very likely we will encounter an island of the type “1+1>2” — a fundamentally different way of modeling… the brain.
Clear examples of the presence of patterns, which are becoming more apparent in everyday activities and accelerating our movement: patterning in the work of Data Scientists/Engineers; patterns in Miro, Notion, Evernote, and presentations; in business: 55 business model patterns, etc. Essentially, all books on how to be successful, make friends, present material, etc., are about learning patterns in different areas (and most of them boil down to the same internal patterns and experiences, but that’s another topic).
Typically, we self-identify — meaning we “stick” to our mental model, which is why advertising campaigns, propaganda, television, diversionary operations, and so on are so effective — they mainly rely on meta-mental models. Pulling a person out of a calm state into an emotional one (for example, triggering fear, psychological trauma, complexes, humiliation, dislike, pride, etc.) reduces awareness and, as a result, activates their mental model, limiting the ability to think critically.
Criticism and Limits of This Mental Model
- This is a very simple model that does not take into account many aspects: consciousness, collective consciousness/unconsciousness, spirituality, and others.
- The model fully relies on a “positivist scientific view,” meaning the more we know, the more accurate the model becomes. Ideally, having access to “closed” information would allow many crises and wars to belong to the “1+1=2” island rather than the “1+1>2” island, at least long before arrival. We would then have a higher likelihood of knowing which island we are approaching.
- It is advisable to use different mental models depending on the task and context. Professions that contribute to the application of more complex models include: psychologists/psychiatrists, sociologists, intelligence services/diversion operations, marketers/branding specialists, trust-based scammers/social engineers, philosophers/spiritual practitioners, investors (G. Soros, C. Munger, W. Buffett), strategic politicians (V. Surkov, A. Arestovich, Z. Brzezinski, etc.), and others (“Professor Moriarty”). In parentheses, I have listed examples of people who, based on my observations, most likely use more advanced mental models, which we will call meta-mental models. There are many meta-mental models (example of a meta-mental model presented by Felix Shmidel). In the context of such models, other mental models are also operated to represent reality. Ideally, if there is an opportunity to influence society’s “reality” through collective consciousness, staying in the shadows is the most advantageous, and it is thought that many people with incredibly complex meta-mental models are unknown to us and familiar only to a very narrow circle of people. It is worth noting that now, in addition to these people’s highly complex models, closed information, and advanced modern knowledge about brain structure and the ability to influence collective consciousness, AI technologies are also being added (Palantir & Co, etc.).
- I will point out that the most complex type of models is spiritual meta-mental models. Beyond them lies the spiritual itself. Based on observations, individuals like A. Arestovich, V. Surkov, V. Putin; G. Rasputin, G. Gurdjieff, Osho, I. Stalin, Joan of Arc (historical figures), and others fall into this category. One of the many complexities in understanding their mental worldview is that their main values and meanings often lie beyond the material world, and in their uniqueness. Consequently, while most people attach great importance to houses, yachts, money, and other material possessions, for these individuals, those things likely have a completely different significance—they invest different symbols into them. Since we mostly evaluate based on our worldview, the difficulty in understanding arises from this.
- Indeterminism: any mental model is a probabilistic model (though the probability may be nearly 100%).
Why It’s Important to Start Rethinking How and Why We Interact with Each Other
The pursuit of high margins leads to situations where even a trip to a mistress becomes a military operation.
F. Shmidel
In the realm of “doing,” especially templated or repetitive actions, any machine will outperform us; see Law 19, “The Terminator.” The templates may be non-trivial and very complex, but sooner or later, new models will learn these too (many of these templates may surprise most people). Within the framework of our current interactions, we have now reached a point where we can teach machines complex templates and sophisticated mapping functions, taking into account contextual information.
We need to decide: will we continue our VMC movement on our ship under the same winds, or…?
Will we see those “others” who may be weaker/slower/less foresighted, who have not yet learned or whose time hasn’t come to operate with meta-mental models, as merely a source of benefits for achieving goals, “batteries” for resources, “bio-robots”? Or will we recognize the inherent value in them as living beings?
Simply because, in this vast galaxy, a miracle has occurred—before us is a living being that, for some reason, appeared here just like you, my dear reader!
Please note that I have not even touched on (geo-political) conflict questions, assuming that we are capable of negotiating with one another.
Summary:
- Our socio-economic system is unbalanced and inevitably leads to the stratification of society (both within countries and on a global level), which will ultimately result in segregation (presumed classes: “elite,” “service class” (physical, “spiritual,” and intellectual), “security,” “bio-mass”).
- Although it is already showing cracks, it is unclear what form it will take in the next 2-10 years.
- AI and automation significantly accelerate the aforementioned dynamics of wealth distribution.
- AI and automation are a revolution in the means of production, which will inevitably lead to social tension and conflict between different social classes.
- Elites, in most cases, are just like us; we all worry about our savings, quality of life, and our children’s futures, and we act based on our mental models and experiences. The pursuit of wealth by E. Musk, B. Gates, W. Buffett, R. Abramovich, J. Bezos, and others beyond what they currently have makes no sense from a numerical standpoint. Whether W. Buffett, for example, has 10 billion more or less will hardly affect his actual standard of living, not to mention the fact that you can’t take anything with you. Earning beyond a reasonable measure resembles:
I fight... simply because I fight!
Porthos
The solution, in my view, should be a movement towards a society where everyone can continue doing what they love (e.g., W. Buffett investing, etc.) while maintaining a “normal” standard of living, but where the means of production, as a result of economic activity, are gradually distributed more evenly through consensual agreements. This would lead to a gradual improvement in wealth distribution.
Regarding the ability to negotiate:
We can't agree on anything; any tool or resource becomes a weapon.
A. Kubyshkin
Any technology, philosophy, concept… becomes a weapon: democracy, rumors, car/ship insurance, financial accounting protocols, money, life resources, talents, the internet, history, good intentions, and in any medium: Earth, air, water, underwater, space, consciousness, soul…
What’s Next
The main question: why and how do we interact with each other (within relationships, companies, cities, countries, Earth…), and how else can we interact? The answer to this question, it seems to me, lies in deciding what is primary and what is secondary:
(a)
- Mutual aid (compassion and mercy)
- Competition
or
(b)
- Competition
- Mutual aid (compassion and mercy)
If (b), then we should take an example from nature and animals. For instance, a pride of lions, having captured a territory and defeated the previous leaders, first kills the cubs of the former leaders—it’s easier to get rid of competitors while they’re weak, ideally while they’re embryos or not born at all.
Accordingly, it is most profitable to compete by eliminating the children of our competitors—whether by killing them, creating unfavorable conditions for their development, feeding them harmful products, and so on. Instead of blocking Ukrainian products, it would be better to neutralize all Ukrainian, African children entirely. And we should treat others’ children the same way, teaching them skills that hinder adaptation and development. It’s better to develop DNA-based weapons to make women of a particular nationality/group infertile; then we wouldn’t need to fight adult warriors later. Let the profit be with us, for it’s the most important thing, isn’t it?
But if (a), the direction I see is improving the quality of life (in many dimensions) for all participants—enhancing well-being. The criterion for correct movement is to “make it not worse than it was.”
Whatever they say to me,
I will still say mine:
What lies ahead—who knows,
But what’s mine—is mine!
I. A. Krylov
Perhaps this doesn’t sound too glamorous or ambitious, but in this formula there is also a comparison with what was—taking the best from the past, learning from mistakes and experience, and cautiously moving toward the future.
Trying to make everything “fair”/“equal” is a utopia.
It’s also a utopia if someone claims “I know what’s best for everyone”—the system governing our ship has become too complex. We should focus on this vector and on continuous feedback from the passengers of the ship.
Please note that with such a movement, passengers will inevitably have to turn their attention inward to understand what they need and whether things have improved, which will inevitably lead to a rethinking of consumer culture and other values. With this direction, we don’t necessarily need a strong artificial intelligence—an argument made by some thinkers (e.g., for resource allocation); though there is no conceptual contradiction either.
In the following articles, I’ll discuss the options we have and why thinking in terms of capitalism and/or socialism and/or … another …ism is inherently flawed in its premise.
Let the profit be with you, for that’s what matters most, isn’t it?
I’m afraid of becoming like those grown-ups who are only interested in numbers...
The Little Prince, A. de Saint-Exupéry
Examples of how I process information, materials with key points and my comments
1. Alexander Kubyshkin, "AI, World Economy, What to Expect" (ENG), key points:
- Energy is the foundation
- Everything in this world—economic (mental, spiritual, cultural)—is a weapon, impossible to agree on anything // political weapon
- 6th technological wave
- 1-5 waves (x2 more energy with each wave/transition)
- New technologies = more energy
- New technology: closed-loop energy production // untested
- Skeptical about the green energy transition (example: a space station with ideal batteries and conditions), limited space + few resources for batteries, need to focus on the “bottleneck” = China
- Crypto/blockchain + AI
- We won’t live in a globalized world!!!
- Supply chains will be localized
- AI will aid this, global world won’t persist
- Technologies are advancing faster than society can adapt
- No initiatives to take risks in governance
- Personal data through blockchain
- Banning decentralization technology
- Moving towards a “China” model
- Paradigm shift in thinking (capitalism)
- Growth impossible if resources are limited
- No room to expand—space; Musk is trying despite atomic tech being frozen for 25 years after Chernobyl—conflicts will arise
- 40% of NASDAQ companies are “lala-land” companies—not clear what, how, or to whom they sell and whether they will achieve their promises
- If we calculate everything (impact on nature, cost of mining minerals, etc.)—the cleanest form of energy is nuclear
- Rising risks, instability, need for quick reaction + feedback signal
- Only loser—Europe:
- No energy
- Governance system ineffective (politicians like clowns—unable to resolve anything)
- Transition to a new economy impossible without war: the question is whether it will be a big one or not
- USA/China—two ways of organizing societies
- China’s model may be adopted by other countries during turbulence
- Japan prediction (demography problem)—robots will become “part” of society
- Conflict during turbulence not necessarily between 2 countries but many
- The Crimean War of the 1850s allowed Britain to maintain dominance for another 100 years
- Schwab’s idea of inclusive capitalism: elites-good, others-slaves
- Book: Tom Burgis, “Kleptopia: How Dirty Money Is Conquering the World”
- Corruption at the top—money in the West—through banking system control of money and top leaders—states (e.g., Kazakhstan)
- Financial bubble must burst (40 years), and only then can something be built
- Money
- Taxation—a state mechanism to encourage people to do something
- Not created by the state, but by banks—less than 1% of money is state-created—95% of “money” is from the private banking sector (though I think this is underestimated). Nobody can define what money is (joke: in the US they can’t define what a woman is either). Banks have different methods of printing/creating money, thus forming financial reports throughout the system: central bank, banks, non-bank financial institutions: 450-470 trillion dollars. Global economy: 90 trillion (financial values in GDP x6). Classical banks: 200 trillion. Shadow “banks” (hedge funds, insurance…): 250 trillion. Central banks: 40 trillion. We don’t know exactly how the system works, we don’t know what money is, what is considered money…
- 50% of all transfers are in dollars; 95% of all derivatives are in dollars
- 2% are not dollars
- Only 50% of apartments in NY are occupied
YouTube channel Alexander Kubyshkin
2. Economist on AI & Society, key points:
- Skeptical—current technologies aren’t “there” yet
- The risk is not the extinction of humanity, but biases with discriminatory social consequences, privacy, intellectual property, etc.
3. Sam Altman | Lex Fridman #367 (ENG), key points:
-
OpenAI has made huge breakthroughs in AI.
-
However, it’s important to note AI risks (according to Lex) — we are on the brink of significant changes (expected within our lifetime).
-
This is inspiring: we know many applications, but many remain unknown. AI could help us defeat poverty and bring humanity closer to happiness, which we all strive for. On what basis is this assumption made? There is the danger that we might be destroyed — akin to 1984 by Orwell or Brave New World by Huxley — losing our ability to think.
-
We need to find a balance between AGI’s potential and its dangers.
-
In 2015, OpenAI declared its goal of working on strong AI, and many in the field laughed or mocked them. Sam admits that poor marketing and branding contributed to this perception.
-
Lex notes that DeepMind & OpenAI were small groups unafraid to declare their goal of creating AGI.
-
Sam: Now, nobody mocks us anymore (they used to laugh at the idea of building AGI).
-
This conversation is about power, the psychology of AI creators, and more.
-
Why GPT-4 and what is it?
-
Sam: GPT-4 is a system we’ll one day look back on and say it was the first AI, slow, buggy, etc., but it will be significant in the future, like computers are today.
-
Sam: I believe progress is exponential, and we won’t be able to point to a specific moment where AGI happened.
-
Lex: There’s too much data! The bigger problem is filtering it.
-
Sam: Many pieces need to come together in one pipeline — either by coming up with new ideas or implementing existing ones well.
-
Lex quotes, “ChatGPT teaches something….”
-
Sam: The most important thing is, ‘How useful is it to people?’
-
Lex: I’m not sure we’ll ever fully understand these models, because they compress the entire internet and texts into a small number of parameters.
-
Sam: “For certain definitions of intelligence and reasoning, ChatGPT can do it at a certain level — it’s somewhat incredible.”
-
Lex: Yes, many would agree, especially those who’ve interacted with ChatGPT.
-
Sam: I agree with the overall ideas.
-
Lex: Peterson asked ChatGPT if Jordan Peterson is a fascist.
-
Sam: “The questions people ask say a lot about them.” Peterson was asking about message lengths and concluded that ChatGPT lies and knows it lies… because it couldn’t count the length of its messages correctly.
-
Sam: I hope that these models will bring nuance back to life! On what basis is this hope/prediction grounded? Will the model really return objective, well-balanced answers to complex questions? This is not obvious to me, but maybe I’m wrong.
-
Sam: Progress is exponential in nature… It’s impossible to pinpoint the exact “moment” (of a breakthrough).
-
Sam: We continue to fine-tune LLM models with feedback from people (reinforcement learning), otherwise, the models wouldn’t be as useful.
-
Sam: Some things that seem trivial to us are not for the model (e.g., counting the number of characters in a message).
-
Sam: We release models to the public early because we believe that if this is going to shape the future, collective intelligence will help identify the “good” and “bad” aspects of what the model does.
-
We could never have discovered things inside companies without the feedback from “collective intelligence.” It’s an iterative process of improvement.
-
GPT-4 is much better and less biased than 3.5… I wasn’t particularly proud of the biases in 3.5. !!! Interesting—what role does ego play here, and what role does the desire to “make the world better” play? How can you decide what’s better for others if you haven’t experienced their challenges (e.g., poverty)? !!!
-
Sam: There will never be two people who agree on the quality of the model. I think the solution is to eventually give personalized control to individuals over the model.
-
Sam: I always dreamed of building AI, but I didn’t think I’d have the chance to do it.
-
I couldn’t imagine, after the very first “primitive” iteration, that I’d have to spend my time answering people’s concerns about the model’s performance: “How many characters are in the answer to a question about two different people?” (hinting that this doesn’t matter much). “You give people AGI, and that’s what they focus on?!” So, you’re building something incredibly powerful, making it accessible to everyone, and you don’t know what they’ll do with it… Maybe we should decide what we want to do with this first before creating it? !!! This is very important (comments about character counts and similar things). !!! Contradiction with previous statements; political correctness or lack of sincerity? !!!… Perhaps… But I wouldn’t have guessed that in advance! Is he trolling? What else might happen that you haven’t anticipated? Worst-case scenario? Best? Most likely?
-
Lex: How much do you spend on safety concerns?
-
Sam: We invest a lot in “alignment” (safety, reducing bias, doing “all good things against all bad things”). I’m proud of the work done on GPT-4 and the tests we conducted. (!!! He didn’t specify how much exactly. !!!)
-
Lex: Could you share your experiences with alignment?
-
Sam: We still haven’t found a way to do alignment for super-powerful systems.
-
Not everyone understands the nuances, and the challenge of alignment is as complex as all the other work we’re doing in creating and making these models useful (like DALL·E).
-
Sam: We need to come to an agreement on the boundaries of these models (what’s good/bad).
-
People will have control over what they can do, depending on the query (e.g., “answer like xxx”). Prompt engineering: learning from interaction.
-
Lex: Since this is based on “our” data, we might learn something about ourselves.
-
With experience, the system gets smarter and smarter, and it feels more like communicating with a person.
-
Lex: I think LLM/ChatGPT will change the “nature” of programming/how we program.
-
Sam: This is already happening, and it’s the biggest short-term change (allowing people to do their jobs and creative tasks better and better). !!! Better or faster? Will this kill creativity in the long run? Will this atrophy the skills needed to develop expertise, such as the long road of becoming a skilled programmer after being inexperienced? !!!
-
Lex/Sam: You can ask for code generation, and in new systems, you can iteratively improve it by “conversing with the model.” A new way of debugging?
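A minimal sketch of what such an iterative “converse with the model” debugging loop could look like, assuming the official openai Python client; the model name, prompt, and the run_tests helper are hypothetical placeholders, not the interviewees’ actual workflow:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def refine_code(task: str, run_tests, max_rounds: int = 3) -> str:
    """Ask the model for code, test it, and feed failures back as follow-ups."""
    messages = [{"role": "user", "content": f"Write Python code for: {task}"}]
    code = ""
    for _ in range(max_rounds):
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        code = reply.choices[0].message.content
        error = run_tests(code)   # hypothetical helper: error string or None
        if error is None:         # tests pass, stop the dialogue
            break
        messages.append({"role": "assistant", "content": code})
        messages.append({"role": "user",
                         "content": f"That fails with: {error}. Please fix it."})
    return code
```

The point is the shape of the loop: instead of rewriting the prompt from scratch, the error output becomes the next conversational turn.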
-
Sam: A creative-dialogue partner… This is a very big thing! !!! He views it as an assistant… But why do things faster? Is it always necessary to do them faster and more efficiently? What are the long-term consequences: What qualities and skills will these systems foster in our children/next generations? Will it lead to laziness? If something can be done in a few clicks, why bother to exert effort? !!!
-
They’re trying to define what “hate speech” is, and what models should do with hate and other human qualities.
-
They bring up the example of a constitution as an analogy for a social contract… created democratically. !!! Are we sure democracy is what’s needed in this context? If you let children decide, they will eat candy and play games… Hitler came to power through democratic elections… What’s the foundation for this assumption? !!!
-
Lex: How bad is the base model? Sam dodged the answer. !!! Openness means being transparent, especially about unpleasant things. !!!
-
Sam: We are open to and welcome criticism… though I’m not sure I understand all the consequences of how criticism affects us.
-
Sam: I don’t like when a computer “teaches” me… It’s important to remember that I “control” the process, and I can “throw out” the computer. !!! Not sure if he’s sincere here or just playing to people’s fear of losing control… maybe it’s a smart salesperson’s manipulation — showing that he understands and shares your fears… He’s like you… You’re in the “same boat.” !!!
-
I often say in the office that we should treat people like adults. !!! What’s this assumption based on? What does it mean to be an adult? Empirical data shows that not everyone is psychologically mature (whatever that means… I’ll leave definitions aside here); this also contradicts the earlier statement that “models will bring nuance back to discussions,” implying the model will play the role of “teacher/adult”: so is the model supposed to teach people or not? Are we sure the majority of people are adults? This is fundamental to understanding how we should interact with people. !!!
-
Sam: The difference between ChatGPT 3.5 and 4 is huge. We at OpenAI are good at finding small technical improvements, and with enough incremental improvements, we got GPT-4 (improvements at every step).

-
Lex and Sam discuss the training cost and the number of parameters in AI models, comparing it to the number of connections in the brain. Sam emphasizes how impressive it is that the model has been trained on a vast portion of the text humanity has produced, compressing much of recorded human experience into its parameters.
-
Lex asks about the number of parameters in the model. Sam explains that OpenAI focuses on results rather than chasing the number of parameters in the model. “We avoid the trap of parameter count competition,” he says. Instead, they prioritize making the models work well, although Sam avoids directly answering how many parameters the model has.
-
Lex brings up Noam Chomsky’s critique, who doubts that Large Language Models (LLMs) can lead to AGI (Artificial General Intelligence). Sam acknowledges that LLMs are part of the equation but insists that other essential pieces are still required. Expanding the paradigm of models like ChatGPT may introduce new ideas that can advance AGI.
-
Sam reflects on the possibility of ChatGPT reaching AGI status in the future, noting that if an oracle (referencing the ancient Greek Oracle of Delphi) told him that ChatGPT-10 would be AGI, he wouldn’t be surprised. He says “It might not require a big breakthrough,” but at the same time, AGI might never be achieved. Instead, Sam sees value in making people “super great” in what they already do, though this raises questions about the current trajectory of humanity, including conflicts and wars.
-
Sam adds that a system that can’t generate radically new scientific knowledge isn’t a “superintelligence.” He admits that there is still much to learn. “I don’t know what strong AI is, but I want to build it.” Interestingly, Sam shares that he has built a bunker to prepare for a potential collapse, hinting at the complexity and potential risks involved with developing such technologies.
-
Sam expresses excitement for a future where ChatGPT becomes a tool that extends human capabilities and “amplifies people’s will.” However, this sits oddly with preparing for survival scenarios, including building bunkers: if the tool is intended to empower humanity, why focus on survival preparations? Sam’s readiness for a collapse contradicts the optimism surrounding AI as a tool for advancing human development. Are we sure we’re not just speeding towards the same old problems? Perhaps we need to rethink our values and ways of thinking before rushing ahead with technological advancements.

-
Lex: I enjoy programming with AI;
-
There are concerns that AI will take programmers’ jobs; if it takes your job, it means you’re not a good enough worker/programmer - this is only partially true; there is a creative component !!! logically incorrect statement: if 80% of the work is standardized, it doesn’t matter what level of programmer you are, it’s cheaper to do it with AI; competence is needed for a small portion of jobs, usually already templated !!! Sam: many programmers are inspired by AI’s capabilities, thinking they can be 10x more productive; few are turning off this option !!! this is a typical application of Law 24. “Ladders in a (Chinese) cinema”: at some point, if you don’t use it, you’ll lose in the “collective race,” so the moment of choice - to turn it off or not - is taken away from a person !!!
-
Sam: Kasparov, after losing to a computer, said chess was finished, but chess has never been more popular;
-
We don’t watch AI play against each other!!! Are you sure we get more enjoyment from the game? Or maybe the element of “surprising/unexpected” moves has been lost, replaced by dry computer calculations? Are we sure that by becoming more like “machines,” life is better/more interesting for us? !!!
-
Lex: AI will get better and blah blah, but we will need emotions/imperfection…
-
Sam: The extent to which we can raise people’s standard of living with AI is just incredible; we can do amazing things! !!! On what is this assumption based? Aren’t suicide rates higher in Japan/South Korea/Switzerland/Scandinavian countries than in Jamaica/Bali…? Aren’t their populations shrinking? Are they happier than people in places with less wealth? You decide to improve something (the standard of living, in this case): but what is the cause of the current state of affairs? Can you solve a problem without understanding its root cause?! Perhaps, if we understood it, AI wouldn’t even be necessary? Perhaps it’s our greed? Hypocrisy? Lobbying by arms manufacturers for their products? The politics of implicit colonization? The use of slave labor in other countries? Provoking wars where it’s profitable for us, to gain cheap access to resources (oil, lithium, cobalt, palladium, uranium…)? Is it profitable for us that other people live in poverty and misery, so we can buy their resources, goods, and “brains” (freelancers) cheaply (for example, the assembly cost of an Apple phone reportedly being a small fraction of its retail price), etc.? Maybe we should understand the cause of the problem before doing anything? !!! Sam: But people like to feel useful, the drama… and we’ll find a way to provide that! !!! But will AI make them useless?! Maybe we, humans, should decide for ourselves what we need, and you can focus on your own life, without “inflicting good” on us? !!!
-
Lex: Eliezer Yudkowsky is an example of someone who warns against strong AI, as he believes it will destroy humanity.
-
The main message: it is impossible to control superintelligence. Sam: there is a chance he’s right. We need to focus on solving this problem with an approach: iterative improvement/early feedback-response/limiting “one-shot” scenarios. Eliezer explained why AI alignment (“making AI aligned with human interests”) is such a complex problem. Sam believes there are logical flaws (doesn’t specify), but Eliezer’s arguments are generally well thought out.
-
Lex: Will strong AI emerge suddenly or gradually? Will everyday life change?
-
Sam: I think (Lex agrees) that for the “world, it’s better for strong AI to emerge gradually and slowly.”
-
I’m afraid of rapid development of strong AI.
-
Sam: Do you think GPT-4 is strong AI?
-
Lex: I think, like with videos of UFOs, we would know immediately. From my interactions with GPT-4, it seems not all its capabilities are available.
-
Sam: I think GPT-4 is not strong AI!!! dodged the question about unavailable features!!!
-
Lex: Do you think GPT-4 has consciousness?
-
Sam: I don’t think so. Do you think GPT-4 has consciousness?
-
Lex: I think it can pretend to have consciousness.
-
Lex: I believe AI will have consciousness. How will that work? Will it experience suffering, memory, communication, etc.?
-
Sam: I’ll refer to what Ilya (Sutskever) said earlier about how we’ll know if a model has consciousness: if there’s nothing in the training data “about consciousness,” but the model can answer that “it understands what consciousness feels like” (loosely translated)—then, there you go… (there’s consciousness).
-
Lex: Consciousness is about experiencing life (Sam thinks consciousness doesn’t include emotions); in the film referenced, a robot/AI smiles to itself (experiencing the experience for the sake of experiencing it) — a sign of consciousness… Lex claims emotions play an important, possibly leading, role.
-
They can’t provide a clear answer about consciousness.
-
Lex: What are the reasons that, during the development of strong AI, something might go wrong? What could go wrong? You said you’re very inspired by the developments, but also a little afraid…
-
Sam: Yes, I’m a little afraid, but I think it’s strange not to be afraid at all.
-
I sympathize with people who are very afraid!!! “we’re on the same side, I sympathize with you”; this could be an effective communication technique without any real substance!!! !!! did not answer the question !!!
-
Lex: Do you think you’ll recognize the moment when the system becomes super-intelligent?
-
Sam: My current concerns…
-
There will be issues with deception/disinformation, economic shocks/crises, something far beyond what we’re prepared for. These problems don’t require super AI, and I think not enough attention is being paid to these issues… even before strong AI!!! He didn’t answer the question but pointed out other things we should worry about before we start talking about strong AI!!!
-
Lex: So, do you think even scaling current models can change geopolitics, etc.?
-
Sam: How do we know that, for example, LLMs aren’t already controlling the flow of information/discussions on Twitter, and so on?
-
We can’t know! And that’s the danger… maybe the answer is regulation, or a stronger AI!!! So, we’ve created a powerful tool for generating deception, made it powerful and “smart enough,” but can’t control the consequences of its use… considering how much influence social media and the like have had — revolutions have been triggered through them. Not exactly making the world better…!!!
-
Lex: How do you deal with market pressures (big companies like Google, open-source projects, etc.)? How do you prioritize?
-
Sam: I stick to my beliefs and mission!!! Where did this mission come from? What’s its “source” or foundation? Hitler and others had a mission too. Are you sure the “source” of this mission is pure?!!!
-
There will be many strong AIs, and we will offer one of them… diversity is good. At first, people laughed at us, but we were brave enough to announce that we’re working on strong AI…!!! Why is it important that you were brave? What role does ego play here, and what part is the desire to help others by making the world better?!!!
-
Sam: OpenAI has a complex structure… a nonprofit controls us, but a purely nonprofit structure didn’t work for us. We came to understand that we need some “benefits of capitalism.”
-
I think no one wants to destroy the world, so capitalism, etc., will lead to “good angels” and companies prevailing in the end!!! This is a logical fallacy… and a dangerous, unfounded assumption. Consider Hitler… people commit suicide, people consciously/unconsciously destroy their lives. We need facts, not…!!!
-
We discuss together and try to mitigate terrible risks.
-
Lex: No one wants to destroy the world!!! An unfounded assumption!!!
-
Lex: It might happen that a couple of guys in a room will say to each other, “Holy crap!!” (upon seeing the results of their work).
-
Sam: This already happens more often than you might think.
-
Lex: This could make you the most powerful people in the world; are you worried that power might corrupt you?
-
Sam: Of course! I think decisions on how to use AI should become more and more democratized (note: more people having a say), but we don’t yet know how this can be done. One reason is to give the world (note: people) time to adapt, reflect, and pass laws.
-
Sam: It’s really bad if one person gets access to these technologies; I do NOT want any privileges in the board of directors!!! What happened to your board of directors?!!!
-
Lex: Creating strong AI gives immense power.
-
Sam: (note: asks questions) Do you think we are doing well? What can we improve? In response to Lex’s note about more openness, Sam asked whether open-sourcing GPT-4 was what he had in mind. Lex said no: he trusts many people at OpenAI and believes in them.
-
Sam: Google probably wouldn’t have opened an API, but we did; maybe we’re not as open as some would like, but we do a lot.
-
Lex agrees and says they’re less concerned about PR risks and more about the availability of the technology (note: the risks related to its application).
-
Sam: He claims people at OpenAI feel responsibility!!! To whom?!!! for the results of their work; he shows openness to ideas on how they can do things better and says he gets feedback from conversations like this!!! In interviews? Are the most important issues really discussed here?!!!
-
Lex: What do you and Elon Musk agree on regarding strong AI?
-
Sam: We agree on assessing risks and dangers and that as a result of strong AI, people’s lives, in general, should improve compared to if it had never existed!!! How do you think this should be achieved? What role should strong AI play in this?!!! Sam often asks counter-questions to “uncomfortable” remarks.!!!
-
Sam: Elon attacks us on Twitter and in other ways, on several fronts. I empathize/sympathize with him because I think he’s very concerned about the safety issues of strong AI!!! Sam frequently uses this expression — I have sympathy/empathy for… — perhaps it’s an effective communication tactic without any real emotions behind it!!! I’m sure there are other motives behind his (Elon’s) complaints!!! Is he hinting at some game or dishonesty?!!! I saw an old video of Elon where he talked about SpaceX, and many famous people in the space industry criticized him. Elon said they were his heroes, and it hurt him to be so heavily criticized. He wished they could see how hard they (Elon and co.) were working to make their vision a reality. Elon was Sam’s hero, even though he’s a jerk on Twitter!!! Strong personal judgment.!!! I’m glad he exists, but I wish he would recognize more of our hard work to do things right!!! A very clever communication tactic — using Elon’s own experience of being criticized as an example.!!!
-
Lex: More love! What do you appreciate about Elon Musk in the context of love/acceptance?
-
Sam: He moved the world forward in significant ways.
-
Electric transportation, space… even though he’s a jerk!!! second assessment!!! He’s a fun and warm guy.
-
Lex: I enjoy the diversity (of opinions, people…) and the real battle of opinions happening on Twitter…
-
Sam: Maybe I should respond… but it’s not my style… (maybe I’ll answer someday).
-
Lex: You both (Sam + Elon) are good people, deeply concerned about strong AI and with a lot of hope and belief in it!!! Maybe we should be more concerned with people rather than technology?!!!
-
Lex: Quoting Elon Musk, it seems ChatGPT is too WOKE (too tolerant: LGBTQ+, gender issues, etc.). Is this a question of bias?
-
Sam: I don’t even know what WOKE means anymore!!! doubt!!! The first GPTs were too biased; there will never be a version of ChatGPT that the world agrees is unbiased!!! A good communication tactic — stating that something is unattainable so as not to answer uncomfortable questions about bias!!!
-
We will try to make the base model more neutral.
-
Lex: A few words about the bias of the model related to the company’s employees?
-
Sam: There is 100% such bias. We try to avoid being stuck in the “San Francisco cognitive bubble.” I plan to travel the world and talk to people (clients) in entirely different contexts (as I often did at YC)!!! This shows that he is consciously trying to use meta-mental models, to understand and model other people.!!!
-
I think we’re better than others at avoiding the “crazy San Francisco bubble,” but I still think we’re pretty deep in it!!! Admitting imperfection — a very good communicative tactic; it builds trust and rapport.!!!
-
Lex: Can you distinguish between the model’s bias and the bias of the employees?
-
Sam: What worries me the most is the bias of the people who fine-tune the model afterward! This is the part we understand the least.
-
-> This is the central and most important point of the entire interview and, in principle, the entire topic: the problem is that we don’t know how to understand and listen to each other. If we don’t solve this, no AI, no strong AI, will help us come to agreements, and we will continue to fight, only now with AI and its derivatives.
===================================== At this point, I will stop, because:
a. I don’t see the point in writing further if the “basis” is already clear, and there isn’t enough focus on it.
b. It is energy-consuming, and I’m not sure if it’s interesting or useful to anyone.
General comments/questions:
- You are giving people a powerful tool with the level of development and consciousness (values, meanings…) they have now, which means they will pursue the same needs and purposes as before, but with a new tool. This means that all processes will accelerate: are you sure we’re not on the “Titanic,” speeding up its course?
- You can’t foresee all the outcomes within one company, so why take on the responsibility to think and act for everyone else?
- What role do your ego and ambitions play here?
- Is it true that you are building/have built a bunker (like Zuckerberg and others)? If so, why? If we’re not on the “Titanic,” why do you need a bunker? And if we are on the “Titanic,” why are you speeding it up?
- Why doesn’t the company’s name reflect its mission: why not open-source the code? How are the models trained? What data do you use? Isn’t the original goal to create OPEN AI for all humanity? Maybe then it should be open-source?
- These models are trained on the collective work of all humanity (texts, translations, designs, artwork, videos — already in the new model — games, code). Why does someone have the right to claim ownership of these models, which wouldn’t exist without the collective labor of all people? Shouldn’t they be open-source then?
- Universal basic income: a big mistake because it doesn’t take into account human nature — there will be a lot of degradation.
4. Sam Altman | Lex Fridman #419 (ENG), main points and comparison with the first interview:
I don’t see the point in analyzing the second interview and comparing it with the first—it’s unclear if anyone needs or is interested in it. If someone is, I will review and analyze it later, comparing both.
I haven’t watched it yet—most likely, there won’t be anything new for me.
A person is so constituted that he looks at happiness reluctantly and with distrust, so happiness has to be imposed on him.
M. E. Saltykov-Shchedrin
You can schedule a meeting via Calendly or Read.ai, or write to me by Email or on Telegram.