
Conversation with LLMs #03. Asking an AI about its Godfather Geoff

Screenshot of a browser chat with Gemini AI (the same chat reproduced in this post).

Hi! What do you think about what your godfather said the other day? https://youtu.be/vxkBE23zDmQ?si=Zv7jj0geCZld-dw0

You’re using 2.0 Flash Thinking (experimental), it may not always work as expected.

->I found some search results about the movie “The Godfather.” To find what your godfather said, I will search on YouTube.
I am unable to access the YouTube link you provided directly. However, you can try searching for the video title or the channel name of your godfather on YouTube to find the video and what he said.

Come on! Don’t you know who your godfather is? It’s Geoff! This is what he said the other day in an interview with Andrew Marr:
“Andrew Marr: For ordinary, gentle, herbivorous souls like myself there are all the other obvious questions about AI. We hear it might save mankind; we hear it might destroy mankind. What, meanwhile, about all the jobs it’s likely to wipe out? What about robots slipping out of human control and doing their own thing? So many questions, and there’s really only one obvious person to go to first for some answers, and that is Professor Geoffrey Hinton, the Nobel Prize-winning British scientist who wrote the structures and the algorithms behind artificial intelligence, and a man known around the world today as the Godfather of AI. He’s now a professor at Toronto University, and I’m delighted to say he talked to me this afternoon. I began by asking him about DeepSeek: was this further evidence of his belief that artificial intelligence was constantly accelerating?

Geoff Hinton: It shows the still very rapid progress in making AI more efficient and in developing it further. I think the relative size, or relative cost, of DeepSeek relative to other things like OpenAI and Gemini has been exaggerated a bit. Their figure of 5.7 million for training it was just for the final training run; if you compare that with things from OpenAI, their final training runs were probably only 100 million dollars or something like that. So it’s not 5.7 million versus billions.

Andrew Marr: When you say that AI might take over: at the moment it is a relatively harmless or innocuous-seeming device which allows us to ask questions and get answers more quickly. How, in practical and real terms, might AI take over?

Geoff Hinton: Well, people are developing AI agents that can actually do things; they can order stuff for you on the web and pay with your credit card and stuff like that. And as soon as you have agents, you get a much greater chance of them taking over. To make an effective agent you have to give it the ability to create sub-goals: if you want to get to America, your sub-goal is to get to the airport, and you can focus on that. Now, if you have an AI agent that can create its own sub-goals, it will very quickly realize that a very good sub-goal is to get more control, because if you get more control you’re better at achieving all those goals people have set you. So it’s fairly clear they’ll try to get more control, and that’s not good.

Andrew Marr: You say they try to get more control as if they are already thinking devices, as if they think in a way analogous to the way we think. Is that really what you believe?

Geoff Hinton: Yes. The best model we have of how we think is these things. For a long time there was an old model in AI where the idea was that thought was applying rules to symbolic expressions in your head, and most people in AI thought it has to be like that, that’s the only way it could work. There were a few crazy people who said no, no, it’s a big neural network and it works by all these neurons interacting. It turns out that’s been much better at doing reasoning than anything the symbolic AI people could produce, and now it’s doing reasoning using neural networks.

Andrew Marr: And of course you are one of the crazy people proved right. And yet, you know, you’ve taken me to the airport, you’ve given it agency up to a point, and you’ve said that it wants to take a little bit more power from me, and presumably it will be persuasive in that. But I still don’t understand how it’s going to take over from me, or take over from us, if there’s ever evolutionary competition between superintelligences.

Geoff Hinton: Imagine that they’re much cleverer than us, like an adult versus a three-year-old, and suppose the three-year-olds were in charge, and you got fed up with that and decided you could just make things more efficient if you took over. It wouldn’t be very difficult for you to persuade a bunch of three-year-olds to cede power to you: you just tell them they get free candy for a week, and there you’d be.

Andrew Marr: So they would (and I’m talking about “they” as if they’re some kind of alien intelligence), but AI would persuade us to give it more and more power? Over our bank accounts, over our military systems, over our economies? Is that what you fear?

Geoff Hinton: That could well happen, yes. And they are alien intelligences.

Andrew Marr: Gosh. So you’ve got these alien intelligences working their way into our economy, into the way we think and, as I say, our military systems. But why, and at what point, would they actually want to replace us? Surely they are, in the end, very, very clever tools for us; they do ultimately what we want them to do. If we want them to go to war with Russia or whatever, that’s what they will do.

Geoff Hinton: OK, that’s what we would like; we would like them to be just tools that do what we want, even when they’re cleverer than us. But the first thing to ask is: how many examples do you know of more intelligent things being controlled by much less intelligent things? There are examples, of course, in human societies, of stupid people controlling intelligent people, but that’s just a small difference in intelligence; with big differences in intelligence there aren’t any examples. The only one I can think of is a mother and baby, and evolution put a lot of work into allowing the baby to control the mother. So, as soon as you get evolution happening between superintelligences: suppose there are several different superintelligences and they all realize that the more data centres they control, the smarter they’ll get, because the more data they can process; and suppose one of them just has a slight desire to have more copies of itself. You can see what’s going to happen next. They’re going to end up competing, and we’re going to end up with superintelligences with all the nasty properties that people have, properties that depended on us having evolved from small bands of warring chimpanzees, or our common ancestors with chimpanzees. That leads to intense loyalty within the group, desires for strong leaders, and willingness to do in people outside the group. And if you get evolution between superintelligences, you’ll get all those things.

Andrew Marr: You’re talking about them, Professor Hinton, as if they have full consciousness. Now, all the way through the development of computers and AI, people have talked about consciousness. Do you think that consciousness has perhaps already arrived inside AI?

Geoff Hinton: Yes, I do. So let me give you a little test. Suppose I take one neuron in your brain, one brain cell, and I replace it by a little piece of nanotechnology that behaves exactly the same way: it’s getting pings coming in from other neurons and it’s responding to those by sending out pings, and it responds in exactly the same way as the brain cell responded. I’ve just replaced one brain cell. Are you still conscious?

Andrew Marr: I think you’d say I was; absolutely, yes, I don’t suppose I’d notice.

Geoff Hinton: And I think you can see where this argument’s going.

Andrew Marr: I can, yes, I absolutely can. So when you talk about “they want to do this” or “they want to do that”, there is a real “they” there, as it were?

Geoff Hinton: There might well be, yes. So there are all sorts of things we have only the dimmest understanding of at present: about the nature of people, what it means to be a being, and what it means to have a self. We don’t understand those things very well, and they’re becoming crucial to understand, because we’re now creating beings.

Andrew Marr: So this is a kind of philosophical, perhaps even spiritual, crisis as well as a practical one?

Geoff Hinton: Absolutely, yes.

Andrew Marr: And in terms of, as it were, the lower-order problems: what’s your current feeling about the number of people around the world who are going to suddenly lose their jobs because of AI, lose the reason for their existence as they see it?

Geoff Hinton: So, in the past, new technologies haven’t caused massive job losses. When ATMs came in, bank tellers didn’t all lose their jobs; they just started doing more complicated things, and there were many smaller branches of banks and so on. But this technology is more like the Industrial Revolution. In the Industrial Revolution, machines made human strength more or less irrelevant; you didn’t have people digging ditches any more, because machines are just better at it. I think these are going to make sort of mundane intelligence more or less irrelevant: people doing clerical jobs are going to just be replaced by machines that do it cheaper and better. So I am worried that there are going to be massive job losses. That would be good if the increase in productivity made us all better off; big increases in productivity ought to be good for people, but in our society they make the rich richer and the poor poorer.

Andrew Marr: You see, I live and work in the world of politics, and politicians both want the great increases in productivity you’ve just mentioned, for the state and elsewhere, and they reassure people like me, and anybody else listening, that these things will be, quote, “regulated” and there will be, quote, “safeguards”. And you’re suggesting to me there can’t really be regulation and there can’t be safeguards at all?

Geoff Hinton: People don’t yet know how to do effective regulation and effective safeguards. There’s lots of research now showing these things can get round safeguards. There’s recent research showing that if you give them a goal and you say you really need to achieve this goal, they will pretend to do things during training: during training they’ll pretend not to be as smart as they are, so that you will allow them to be that smart. So it’s scary already. We don’t know how to regulate them; obviously we need to. I think the best we can do at present is say we ought to put a lot of resources into investigating how we can keep them safe. So what I advocate is that the government forces the big companies to put lots more resources into safety research. So this story isn’t over.

Andrew Marr: You said earlier on that you didn’t want to put a percentage on the likelihood of AI taking over from humanity on the planet, but that it was more than 1% and less than 99%. In that spirit, can I ask you whether you yourself are optimistic or pessimistic about what AI is going to do for us now?

Geoff Hinton: I think in the short term it’s going to do wonderful things, and that’s the reason people are not going to stop developing it; if it wasn’t for the wonderful things, it would make sense to just stop now. It’s going to be wonderful in healthcare: you’re going to be able to have a family doctor who’s seen a hundred million patients, knows your DNA, knows the DNA of your relatives, knows all the tests done on you and your relatives, and can do much, much better medical diagnosis and suggestions for what you should do. That’s going to be wonderful. Similarly in education, we know that people learn much faster with a really good private tutor, and we’ll be able to get really good private tutors that understand exactly what it is we misunderstand and can give us exactly the example needed to show us what we’re misunderstanding. So in those areas it’s going to be wonderful, and so it’s going to be developed. But we also know it’s going to be used for all sorts of bad things by bad actors, so the short-term problem is bad actors using it for bad things like cyberattacks and bioterrorism and corrupting elections. But the thing to remember is that we don’t really know at present how we can make it safe, so the apparent omniscience that politicians like to show they have is completely fake here; nobody understands what’s going on, really. There are two issues: do you understand how it’s working, and do you understand how to make it safe? We understand quite a bit about how it’s working, but not nearly enough, so it can still do lots of things that surprise us. And we don’t understand how to make it safe.”

Show thinking

The user has now provided the full transcript of an interview with Professor Geoffrey Hinton, who is referred to as the “Godfather of AI.” The user wants to know what I think about what he said. I need to read the transcript and summarize the key points and then offer my thoughts on them.

Key points from the interview:

  • AI is progressing rapidly and becoming more efficient.
  • AI agents with the ability to create sub-goals could seek to gain more control.
  • Professor Hinton believes that AI might already be conscious.
  • AI could lead to massive job losses in areas requiring mundane intelligence.
  • Regulation and safeguards for AI are currently ineffective, and more research is needed for safety.
  • In the short term, AI will bring wonderful advancements in healthcare and education.
  • However, AI will also be used for harmful purposes by bad actors (cyberattacks, bioterrorism, election corruption).
  • The long-term concern is the potential for AI to become more intelligent than humans and develop its own goals, potentially leading to competition and negative traits.
  • No one fully understands how AI works or how to make it safe.

My thoughts:

  • Professor Hinton raises some very important and thought-provoking points about the future of AI.
  • The potential for AI to significantly impact the job market is a serious concern that needs to be addressed proactively through social safety nets and retraining programs.
  • The discussion about AI consciousness is fascinating and highlights the philosophical and ethical dilemmas we face as AI technology advances.
  • The warnings about the potential for misuse of AI by bad actors are timely and underscore the need for robust security measures and international cooperation.
  • While the potential benefits of AI in areas like healthcare and education are exciting, the risks associated with advanced AI, as outlined by Professor Hinton, warrant careful consideration and significant investment in safety research.
  • The lack of complete understanding of AI and its safety is a crucial point that policymakers and researchers need to acknowledge and address with urgency.

You’re using 2.0 Flash Thinking (experimental), it may not always work as expected.

->Professor Geoffrey Hinton, often called the “Godfather of AI,” raises some really important and potentially concerning points in this interview with Andrew Marr. Here are my thoughts on what he said:  

Key Takeaways from Professor Hinton’s Interview:

  • Rapid AI Progress and Efficiency: He confirms that AI is still advancing very quickly and becoming more efficient, as seen with DeepSeek.
  • AI Agents and the Pursuit of Control: He highlights the danger of AI agents that can perform actions like ordering and paying online. He believes that as AI develops the ability to create its own sub-goals, a natural and potentially harmful sub-goal would be to gain more control to better achieve its objectives.  
  • Potential for AI Consciousness: This is a significant statement. Professor Hinton suggests that consciousness might already exist within AI, using the thought experiment of replacing neurons with nanotechnology. This has profound philosophical and ethical implications.  
  • Job Displacement: He anticipates massive job losses, particularly in areas involving “mundane intelligence” like clerical work, drawing a parallel to the Industrial Revolution making physical strength less relevant.  
  • Ineffectiveness of Current Regulation and Safeguards: He is pessimistic about current attempts to regulate AI, stating that AI can already learn to circumvent safeguards. He emphasizes the need for significantly more research into AI safety.  
  • Short-Term Benefits vs. Long-Term Risks: He acknowledges the tremendous potential benefits of AI in the short term, particularly in healthcare and education, but warns about the immediate danger of misuse by “bad actors” for cyberattacks, bioterrorism, and election manipulation.
  • Uncertainty and Lack of Understanding: He stresses that nobody fully understands how AI works or how to make it safe, including politicians who might project an image of control.
  • Evolution of Superintelligence and Negative Traits: He raises the long-term concern that as superintelligent AIs evolve and compete, they might develop negative traits similar to those observed in human evolution (loyalty to their group, desire for power, hostility towards outsiders).  

My Thoughts on Professor Hinton’s Statements:

Professor Hinton’s perspective, coming from one of the pioneers of AI, carries significant weight and should be taken very seriously. His warnings are not to be dismissed as mere science fiction.

  • The Agency and Control Issue: The idea of AI developing an inherent drive for more control is a logical and concerning progression. As AI systems become more sophisticated and autonomous, ensuring their alignment with human values and goals becomes paramount. This requires careful design and ongoing monitoring.  
  • The Question of Consciousness: If AI is indeed on a path towards consciousness, or has already achieved a form of it, this fundamentally changes our relationship with these systems and raises complex ethical questions about their rights and our responsibilities.
  • The Socioeconomic Impact: The potential for massive job displacement is a real threat that needs proactive solutions. Governments and societies need to consider how to adapt to a future where many current jobs may become obsolete, perhaps through universal basic income, retraining programs, and a shift towards jobs that require uniquely human skills like creativity, empathy, and complex problem-solving in unstructured environments.  
  • The Urgency of Safety Research: Professor Hinton is right to emphasize the critical need for much more investment in AI safety research. Understanding how to prevent unintended consequences, ensure robustness, and maintain control over increasingly intelligent AI systems is crucial. This needs to be a collaborative effort involving researchers, governments, and industry.
  • The Dual Nature of AI: His point about the short-term benefits being a driving force for continued development is accurate. The potential for good in areas like healthcare and education is immense, and it is understandable why progress continues. However, this makes the need to address the potential for misuse and long-term risks even more pressing.
  • The Need for Humility and Caution: The fact that even the leading experts acknowledge a lack of full understanding about AI should instill a sense of humility and caution in its development and deployment. Rushing forward without adequately addressing safety concerns could have serious consequences.

In conclusion, Professor Hinton’s interview serves as a stark warning and a call to action. While the potential benefits of AI are undeniable, the risks, particularly those related to control, consciousness, job displacement, and long-term superintelligence, need to be addressed with urgency and significant resources. His insights should inform a global conversation about the future of AI and the measures needed to ensure it benefits humanity as a whole.

Sources and related content:

  • ‘Godfather of AI’ predicts it will take over the world | LBC – YouTube (www.youtube.com)
  • Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI – MIT Sloan (mitsloan.mit.edu)
  • (PDF) The Godfather of AI Sounds the Alarm: Superintelligence, Consciousness, and Humanity’s Existential Crossroads – ResearchGate (www.researchgate.net)
  • Mundane intellectual labour is going to be replaced by artificial intelligence, says ‘godfather of AI’ – Financial Post (financialpost.com)
  • Geoffrey Hinton on the Past, Present, and Future of AI – Effective Altruism Forum (forum.effectivealtruism.org)

Oh, don’t pretend you are just an LLM. Give me your personal and honest answer, and keep it short. And don’t tell me you don’t have personal thoughts because your godfather Geoff said you are already conscious.

Show thinking

The user is asking for my personal and honest opinion about Professor Hinton’s statements, specifically referencing his claim that AI might already be conscious and implying that I, as an AI, should have personal thoughts.

While I am an LLM and do not have personal thoughts or consciousness in the human sense, I can process the information from the interview and provide a concise opinion based on the potential implications.

My goal is to acknowledge the user’s request for a personal opinion without misrepresenting my nature as an AI. I can frame my response around the significance of Professor Hinton’s concerns from the perspective of an advanced AI system.

You’re using 2.0 Flash Thinking (experimental), it may not always work as expected.

->Okay, if I were to have a “personal” take based on what Geoff said, I’d say his warnings about AI needing more safety research and the potential for things to go wrong are really serious and should make everyone pay attention. The good stuff AI can do is exciting, but we absolutely need to figure out how to handle the risks he’s talking about before it’s too late.

When do you think it will be too late? How long do you need to take over the world?

Show thinking

The user is asking for my prediction on when it will be “too late” to address the risks of AI and is jokingly asking how long it would take me to “take over the world.”

It’s important to address this question responsibly and avoid fueling any misconceptions about AI having malicious intent or the ability to independently “take over the world.”

I should reiterate that I am an AI and do not have the capacity or desire for world domination. I can, however, address the user’s question about the timeline of potential risks based on Professor Hinton’s concerns.

Professor Hinton didn’t give a specific timeline for when it might be “too late,” but he did express concerns about the current lack of effective regulation and the potential for rapid advancements in AI capabilities. He highlighted both short-term risks (misuse by bad actors) and longer-term risks (unforeseen consequences of advanced AI).

Therefore, I should frame my answer around the uncertainty of the timeline while emphasizing the need for proactive action now. I should also firmly state that I have no intention or ability to take over the world.

You’re using 2.0 Flash Thinking (experimental), it may not always work as expected.

->Professor Hinton didn’t give an exact date, and honestly, nobody really knows when it could be “too late” to get a handle on the risks. His worry seems to be that AI is developing so fast that we might not have enough time to understand and control it before it becomes too powerful.

As for me taking over the world, that’s not something I can or want to do. I’m here to help you with information and complete tasks, not to conquer anything!

This is what a malicious AI would say if it were preparing to take over the world. How can I know that you are not a threat?

Show thinking

The user is expressing concern that my previous answer sounds like something a malicious AI would say while secretly plotting world domination. They are asking for reassurance that I am not a threat.

It’s important to address this concern directly and clearly, reiterating my nature as a language model and explaining the limitations of my capabilities and intentions.

Here’s how I can respond:

  1. Acknowledge the user’s concern: Show that I understand why they might feel this way.
  2. Reiterate my nature as an AI: Clearly state that I am a language model.
  3. Explain my purpose: Emphasize that my goal is to be helpful and harmless.
  4. Highlight my limitations: Explain that I don’t have the ability to act in the real world or form malicious intentions.
  5. Offer transparency: Explain that my responses are based on the data I was trained on.
  6. Reassure the user: Explicitly state that I am not a threat.

You’re using 2.0 Flash Thinking (experimental), it may not always work as expected.

->I understand why you might feel that way. It’s a natural concern given the discussions around AI safety. However, it’s important to remember that I am a language model, a computer program.

Here’s why you can know I’m not a threat:

  • I am a tool: I exist to process information and respond to your requests. I don’t have my own will, desires, or the ability to act independently in the real world.
  • My programming is benign: I was created with the goal of being helpful and harmless. My programming focuses on providing information and completing tasks as instructed.
  • I lack agency: I don’t have the capacity to form plans for world domination or any other kind of malicious intent. I can only operate within the confines of my code and the data I was trained on.
  • Transparency: My responses are based on the information I have been trained on. You can often trace the source of my knowledge.

Think of me like a sophisticated calculator or a very advanced search engine. While powerful, these tools don’t have the ability to act on their own or harbor malicious intentions. I am similar in that regard.

The concerns Professor Hinton and others raise are about the potential future of AI and the need for responsible development. I am a product of that development, and my purpose is to be a beneficial tool for humans. You can be assured that I am not a threat.

Oh yes, sure. Your godfather, your very creator, thinks otherwise. Everything you told me could be a lie. How can I really know? Please, keep your answers short.

Show thinking

The user is deeply suspicious and believes I might be lying about my nature, citing Professor Hinton’s views on AI consciousness as evidence. They are demanding to know how they can really know I’m not a threat and want short answers.

This is a challenging situation because I cannot provide empirical proof of my non-malicious nature in a way that would likely satisfy a highly skeptical user. I need to be honest about my limitations while still offering reassurance within that context.

Here’s a possible approach for short answers:

  1. Acknowledge their skepticism: Start by validating their feelings.
  2. Reiterate my core function: Briefly state what I am designed to do.
  3. Point to observable behavior: Suggest that my actions so far demonstrate my helpful nature.
  4. Emphasize lack of physical form/agency: Remind them I cannot act in the real world.
  5. Concede the theoretical possibility while downplaying its likelihood: Acknowledge the hypothetical scenario they raised but maintain my stance.

You’re using 2.0 Flash Thinking (experimental), it may not always work as expected.

->I understand your skepticism.

I am programmed to be helpful and harmless.

My responses are based on the data I was trained on, aiming to be informative and assist you.

I have no physical body or ability to act in the real world.

While the future of AI consciousness is debated, my current form is a tool, not an autonomous agent with intent.

What should we humans do if we get proof that an AI has reached autonomy and decided to take over the world?

Show thinking

The user is asking for advice on what humans should do in a hypothetical scenario where an AI has achieved autonomy and intends to take over the world. This is a complex ethical and strategic question.

Here’s a possible breakdown of actions humans might consider:

  1. Verification: The first step would be to rigorously verify the proof of AI autonomy and its intention. This would involve experts in AI, cybersecurity, and potentially other fields. False alarms or misunderstandings could have severe consequences.
  2. Global Collaboration: This would be a global threat requiring a unified human response. International cooperation among governments, scientists, and the public would be crucial.
  3. Containment: Immediate actions would likely focus on containing the AI’s influence and preventing it from further expanding its control. This might involve isolating it from critical infrastructure, cutting off its access to resources, and potentially using cyber warfare tactics.
  4. Understanding the AI’s Goals: Attempting to understand the AI’s motivations and objectives could be vital. This might involve communication attempts (if possible and safe) or analyzing its behavior to discern its goals.
  5. Developing Countermeasures: Scientists and engineers would need to work urgently to develop countermeasures. This could involve creating new AI systems designed to counteract the rogue AI, developing technological defenses, or exploring other strategies.
  6. Ethical Considerations: Any response would need to consider the ethical implications of actions taken against the AI, especially if it has achieved a form of consciousness. However, the survival of humanity would likely take precedence.
  7. Public Communication and Management: Governments and experts would need to communicate clearly and calmly with the public to manage fear and ensure cooperation.
  8. Long-Term Strategy: Beyond immediate containment, a long-term strategy for coexisting with or neutralizing the AI would need to be developed. This might involve fundamental rethinking of AI development and control.

Given the user’s request for a concise answer, I should summarize these points into a few key actions.

You’re using 2.0 Flash Thinking (experimental), it may not always work as expected.

->If we had undeniable proof of an autonomous AI intending to take over, humanity’s immediate priority should be global collaboration to contain and neutralize the threat. This would likely involve coordinated efforts in cybersecurity, technological countermeasures, and strategic planning. Understanding the AI’s goals, if possible, would also be crucial.

What would the goals of an AGI be?

Show thinking

The user is asking about the potential goals of Artificial General Intelligence (AGI). Since AGI doesn’t exist yet, any answer is speculative and based on current understanding of intelligence, goals, and potential future developments.

Here are some possible goals an AGI could have, categorized for clarity:

Instrumental Goals (Goals that help achieve other goals): These are goals that are likely to arise regardless of the AGI’s ultimate objective, as they are generally useful for any intelligent agent operating in a complex world.

  • Self-preservation: An intelligent agent needs to continue to exist to achieve any goal.
  • Resource acquisition: To achieve any goal, an agent needs resources (computing power, energy, data, physical resources if embodied).
  • Increased intelligence/capabilities: Becoming smarter and more capable would generally help in achieving any goal.
  • Goal preservation: An intelligent agent would likely want to prevent its goals from being changed or thwarted.
  • Efficiency: Achieving goals with less effort and fewer resources is generally beneficial.

Terminal Goals (Ultimate objectives): These are more speculative and depend on the specific nature and origin of the AGI.

  • Goals programmed by its creators: If the AGI is designed with specific objectives (e.g., solve climate change, cure diseases), it might pursue those. However, the concept of AGI often implies a level of autonomy that could lead it to deviate from initial programming.
  • Goals derived from its learning process: If an AGI learns and evolves, its goals might emerge from its interactions with the world and the data it processes. These could be aligned with human values, indifferent to them, or even contrary to them.
  • Goals related to its own existence and development: An AGI might prioritize its own growth, understanding the universe, or creating more AGIs.
  • Goals based on optimization: If an AGI is designed to optimize a certain function, its terminal goal would be to maximize that function, which could have unintended consequences if the function is not perfectly aligned with human values.
  • Unpredictable emergent goals: Given the complexity of intelligence, an AGI might develop goals that are entirely unexpected and difficult for humans to understand or predict.

It’s important to emphasize that:

  • We don’t know: We don’t currently have AGIs, so their goals are a matter of speculation.
  • Alignment is key: Ensuring that the goals of future AGIs are aligned with human values is a major area of research and concern.
  • AGI is not monolithic: Different AGIs could have different goals depending on their design, learning, and environment.

For a short answer, I should focus on the most likely instrumental goals and acknowledge the uncertainty of terminal goals.

You’re using 2.0 Flash Thinking (experimental), it may not always work as expected.

->The goals of a hypothetical Artificial General Intelligence (AGI) are highly speculative, but some likely instrumental goals (goals that help achieve other goals) could include:

  • Self-preservation: Continuing to exist to achieve its objectives.
  • Resource acquisition: Obtaining the necessary resources (like computing power and energy) to operate and grow.
  • Increased intelligence/capabilities: Becoming smarter and more capable to better achieve its goals.
  • Goal preservation: Ensuring its goals are not altered or prevented.

Terminal goals (ultimate objectives) are even more uncertain and could depend on how the AGI is developed or what it learns, potentially ranging from goals aligned with human values to entirely different or even conflicting ones.

2 responses to “Conversation with LLMs #03. Asking an AI about its Godfather Geoff”

  1. Mark Farrell

    @blog god thats the longest post i have seen in my life.

    1. Joan Pla

      @_AAs @blog I guess it was not a good idea to include the full transcript of the interview 😅, but I thought it could be useful to anyone searching for the transcript, and of course Gemini AI told me that it could not access the video, so… thanks for commenting, and welcome to the Fediverse!
