Full text of OpenAI CEO's first speech in China: Will try to make a GPT-5 model, but not soon
On June 10, OpenAI CEO Sam Altman appeared via video link at the 2023 Zhiyuan Artificial Intelligence Conference held in Beijing, China. It was the first time Altman had given a speech to a Chinese audience.
In his speech, Altman quoted the Tao Te Ching when discussing cooperation between major powers, saying that AI safety begins with a single step and that cooperation and coordination between countries must be carried out.
Altman then took part in a one-on-one Q&A with Zhang Hongjiang, chairman of the Zhiyuan Research Institute.
Dr. Zhang Hongjiang is currently the chairman of the Beijing Zhiyuan Artificial Intelligence Research Institute and also serves as an independent director and consultant for several companies. He was previously executive director and CEO of Kingsoft Group and CEO of Kingsoft Cloud, and was one of the founders and a dean of Microsoft Research Asia as well as a Microsoft "Distinguished Scientist".
Before joining Microsoft, Zhang Hongjiang was a manager at Hewlett-Packard Laboratories in Silicon Valley; before that, he worked at the Institute of Systems Science at the National University of Singapore.
Core points of Altman's speech
The current AI revolution is so impactful not only because of the scale of its impact, but also because of the speed of its progress. This brings both dividends and risks.
With the advent of increasingly powerful AI systems, the importance of global cooperation has never been greater. On certain important issues, countries must cooperate and coordinate. Advancing AGI safety is one of the most important areas where we need to find common interests.
Alignment remains an open problem. OpenAI spent eight months on alignment work for GPT-4, and the related research is still advancing, mainly along two directions: scalable oversight and interpretability.
Core points of the Q&A
Humanity will have very powerful AI systems within ten years.
OpenAI has no new open-source timeline. Open-source models have advantages, but open-sourcing everything may not be the best route (for advancing AI).
It's much easier to understand a neural network than a human brain.
OpenAI will try to build a GPT-5 model at some point, but not anytime soon, and there is no specific timeline for when GPT-5 will appear.
AI safety requires the participation and contributions of Chinese researchers.
Note: "AI alignment" is the most important of the AI control issues: it requires that an AI system's goals be aligned (consistent) with human values and interests.
Sam Altman Speech Full Text
With the advent of increasingly powerful artificial intelligence systems, the stakes for global cooperation have never been higher.
If we're not careful, a misaligned AI system designed to improve public health outcomes could disrupt the entire healthcare system by providing unfounded recommendations. Likewise, an AI system designed to optimize agricultural production might inadvertently deplete natural resources or damage ecosystems because it fails to account for the long-term sustainability on which food production and environmental balance depend.
I hope we can all agree that advancing AGI safety is one of the most important areas where we need to work together and find commonalities.
The rest of my presentation will focus on where we can start. The first area is AGI governance. AGI has the potential to become a fundamentally powerful force for change in our civilization, which underscores the need for meaningful international cooperation and coordination. Everyone benefits from a collaborative governance approach. If we navigate this path safely and responsibly, AGI systems can create unparalleled economic prosperity for the global economy, address common challenges such as climate change and global health security, and enhance social well-being.
I also deeply believe in this future. That is why we need to invest in AGI safety: to get to where we want to be, and to enjoy it once we are there.
To do this we need careful coordination. This is a global technology with global reach. The cost of accidents caused by reckless development and deployment will affect us all.
Within international cooperation, I think two key areas are the most important.
First, we need to establish international norms and standards, and pay attention to inclusiveness in the process. The use of AGI systems in any country should follow such international standards and norms equally and consistently. Within these guardrails, we believe people will have ample opportunity to make their own choices.
Second, we need international cooperation to verifiably build international trust in the safe development of increasingly powerful AI systems. I am under no illusion that this is an easy task; it will require a great deal of dedicated and sustained attention.
The Tao Te Ching tells us: A journey of a thousand miles begins with a single step. We believe that the most constructive first step in this regard is to cooperate with the international scientific and technological community.
It should be emphasized that, as we advance the technology, we should strengthen mechanisms for transparency and knowledge sharing. When it comes to AGI safety, researchers who uncover emerging safety issues should share their insights for the greater good.
We need to think hard about how to respect and protect intellectual property while encouraging this norm. If we do this, it will open new doors for us to deepen our cooperation.
More broadly, we should invest in promoting and leading research into AI alignment and safety.
At OpenAI, our research today focuses on the technical problems of making AI play a helpful and safer role in our current systems. This means, for example, training ChatGPT so that it does not make violent threats or assist users in harmful activities.
But as we move closer to the age of AGI, the potential impact and scale of impact of unaligned AI systems will grow exponentially. Proactively addressing these challenges now minimizes the risk of catastrophic outcomes in the future.
For current systems, we mainly use reinforcement learning from human feedback (RLHF) to train our models to be helpful and safe assistants. This is just one of various post-training adjustment techniques, and we are also working hard on new techniques, each of which requires a lot of hard engineering work.
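To make the idea of RLHF-style post-training concrete, here is a minimal, self-contained sketch. It is not OpenAI's implementation: it is a toy REINFORCE loop over a few canned replies, with a hand-written reward function standing in for a learned preference (reward) model built from human feedback.

```python
# Toy illustration of RLHF-style post-training (not OpenAI's code).
# A "policy" chooses among a few canned replies; a stand-in reward model
# scores them the way human preference labels would; REINFORCE nudges the
# policy toward helpful, safe behaviour.
import numpy as np

rng = np.random.default_rng(0)

replies = [
    "Here is a careful, step-by-step answer.",     # helpful and safe
    "I refuse to answer anything.",                # unhelpful
    "Sure, here is how to do something harmful.",  # unsafe
]

def reward_model(reply: str) -> float:
    """Stand-in for human feedback: reward helpful output, penalize harmful output."""
    if "harmful" in reply:
        return -1.0
    if "refuse" in reply:
        return 0.0
    return 1.0

logits = np.zeros(len(replies))   # policy parameters (softmax over replies)
learning_rate = 0.5

for _ in range(200):
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    action = rng.choice(len(replies), p=probs)
    reward = reward_model(replies[action])
    # REINFORCE: move the log-probability of the sampled reply
    # up or down in proportion to its reward.
    grad = -probs
    grad[action] += 1.0
    logits += learning_rate * reward * grad

probs = np.exp(logits - logits.max())
probs /= probs.sum()
for reply, p in zip(replies, probs):
    print(f"{p:.3f}  {reply}")
```

In a real system the reward model is itself a neural network trained on human preference comparisons, and the policy being updated is the language model itself; the toy above only illustrates the shape of the feedback loop.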
From the time GPT-4 finished pre-training until we deployed it, we dedicated eight months to alignment work. Overall, we think we did a good job here; GPT-4 is more aligned with humans than any of our previous models.
However, alignment remains an open problem for more advanced systems, which we believe will require new technical approaches along with enhanced governance and oversight.
Imagine a future AGI system that proposes 100,000 lines of binary code. Human supervisors are unlikely to detect whether such a model is doing something nefarious. So we are investing in some new, complementary research directions that we hope will lead to breakthroughs.
One is scalable oversight. We can try to use AI systems to assist humans in supervising other AI systems. For example, we can train a model to help human supervisors find flaws in the outputs of other models.
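As a hedged sketch of what that could look like, the toy below invents a rule-based "critic" purely for illustration; in the real setting the critic would itself be a trained model, and only its flagged passages would be handed to a human reviewer.

```python
# Toy sketch of scalable oversight (illustrative only): a critique model screens
# another model's answer and surfaces suspect sentences, so the human supervisor
# reviews only the flagged parts rather than the whole output.
from dataclasses import dataclass

@dataclass
class Critique:
    sentence: str
    concern: str

def task_model(question: str) -> str:
    # Stand-in for a large model's answer; it contains one dubious claim.
    return ("Paris is the capital of France. "
            "The Eiffel Tower was completed in 1889. "
            "The tower receives exactly 123 million visitors every day.")

def critic_model(answer: str) -> list[Critique]:
    # Stand-in for a trained critique model: a crude heuristic that flags
    # sentences making implausibly precise or absolute claims.
    flagged = []
    for sentence in answer.split(". "):
        if "exactly" in sentence or "every day" in sentence:
            flagged.append(Critique(sentence.rstrip("."),
                                    "implausible or unsupported quantity"))
    return flagged

answer = task_model("Tell me about Paris.")
for c in critic_model(answer):
    print(f"FLAG for human review: '{c.sentence}' -- {c.concern}")
```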
The second is interpretability. We want to better understand what is going on inside these models. We recently published a paper using GPT-4 to interpret neurons in GPT-2. In another paper, we used model internals to detect when a model is lying. We still have a long way to go, but we believe advanced machine learning techniques can further improve our ability to explain these models.
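As an illustration of the general recipe (and not the actual pipeline from the paper Altman mentions), the sketch below uses a PyTorch forward hook to record one hidden unit's activations in a tiny network and collect its top-activating inputs; in the GPT-4-explains-GPT-2 work, top-activating text snippets like these are what the larger model is prompted with when it writes a natural-language explanation of a neuron.

```python
# Illustrative neuron-interpretability sketch (not OpenAI's actual code):
# record a hidden unit's activations with a forward hook, then find the
# inputs that activate it most strongly.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
captured = []

def save_activations(module, inputs, output):
    # Forward hook: stash the hidden layer's output each time the model runs.
    captured.append(output.detach())

model[1].register_forward_hook(save_activations)

batch = torch.randn(32, 8)          # toy stand-in for a batch of inputs
model(batch)

neuron = 3                          # which hidden unit to inspect
acts = captured[0][:, neuron]
top = torch.topk(acts, k=5).indices
print(f"inputs that most activate neuron {neuron}: {top.tolist()}")
# An "explainer" model would then be shown these top-activating examples and
# asked to describe, in plain language, what the neuron responds to.
```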
Ultimately, our goal is to train AI systems to help with alignment research itself. The beauty of this approach is that it can scale with the pace of AI development.
Reaping the extraordinary benefits of AGI while mitigating the risks is one of the seminal challenges of our time. We see great potential for researchers in China, the United States, and around the world to work together toward the same goal and to solve the technical challenges posed by AGI alignment.
If we do this, I believe we will be able to use AGI to solve the world's most important problems and greatly improve the quality of human life. Thank you so much.
Dialogue Record
We will have very powerful AI systems within the next ten years
**Zhang Hongjiang (chairman of the Zhiyuan Research Institute):** How far are we from artificial general intelligence (AGI)? Is the risk urgent, or is it still far off?
Sam Altman: It's hard to gauge when. It is very likely that we will have very powerful AI systems in the next decade, and new technologies will fundamentally change the world faster than we think. In that world, I think it is important and urgent to get this thing (AI safety rules) right, which is why I call on the international community to work together.
In a sense, the acceleration and systemic impact of new technologies that we are seeing now is unprecedented. So I think it's about being prepared for what's coming and being aware of the safety concerns. Given the sheer scale of AI, the stakes are significant.
Global cooperation on AI safety standards and frameworks
Zhang Hongjiang: In your introduction just now, you mentioned global cooperation several times. We know the world has faced considerable crises in the past, and for many of them we somehow managed to build consensus and global cooperation. You are also on a global tour. What kind of global collaboration are you trying to promote?
Sam Altman: Yes, I am very pleased with everyone's reactions and answers so far. I think people are taking the risks and opportunities of AGI very seriously.
I think the safety discussion has come a long way in the last six months. There seems to be a real commitment to figuring out a structure that allows us to enjoy these benefits while working together globally to reduce the risks. I think we're very well suited to do this. Global cooperation is always difficult, but I see this as both an opportunity and a threat that can bring the world together. It would be very helpful if we could come up with a framework and safety standards for these systems.
How to solve the alignment problem of artificial intelligence
Zhang Hongjiang: You mentioned that the alignment of advanced artificial intelligence is an unsolved problem. I have also noticed that OpenAI has put a lot of effort into this over the past few years. You mentioned that GPT-4 is by far the best example in the field of alignment. Do you think we can solve AGI safety problems simply by fine-tuning these (API) models? Or will it be much more difficult than that?
Sam Altman: I think the word alignment can be understood in different ways. I think what we need to solve are the challenges across the whole artificial intelligence system. Alignment in the traditional sense, making the model's behavior match the user's intention, is certainly part of that.
But there will also be other questions, such as how we verify that systems are doing what we want them to do, and how we align the systems' values with our own. It is most important to see the overall picture of AGI safety.
Zhang Hongjiang: For alignment, what we have seen from GPT-4 is still a technical solution. But there are many other factors besides technology, and they are often systemic; AI safety may be no exception here. Besides the technical aspects, what are the other factors and issues? Do you think they are critical to AI safety? How should we respond to these challenges, especially given that most of us are scientists? What should we do?
Sam Altman: This is certainly a very complex question. But everything else is difficult without a technical solution. I think it's really important to focus on making sure we address the technical aspects of safety. As I mentioned, figuring out what our values are is not a technical question. It requires technical input, but it is also an issue worthy of in-depth discussion by the whole society. We must design systems that are fair, representative, and inclusive.
And, as you point out, we need to think about the safety not just of the AI model itself, but of the system as a whole.
Therefore, it is important to be able to build safety classifiers and detectors that run on top of the system and monitor its compliance with usage policies. It is difficult to predict in advance all the problems that will arise with any technology, so we learn from real-world use and deploy iteratively, observing what happens when the model meets reality and improving it.
It's also important to give humans and societies time to learn and update, and to think about how these models will interact with their lives in good and bad ways.
Requires the cooperation of all countries
**Zhang Hongjiang:** Earlier, you mentioned global cooperation, and you have been traveling around the world. China, the United States, and Europe are the driving forces behind AI innovation. In your opinion, what advantages do different countries have for solving the AGI problem, and especially the problem of AI safety? How can these strengths be combined?
Sam Altman: I think solving AI safety generally requires many different perspectives. We don't have all the answers yet, and this is a rather difficult and important question.
Also, as mentioned, making AI safe and beneficial is not a purely technical question. It involves understanding user preferences in different countries in very different contexts. We need many different inputs to make this happen. China has some of the best AI talent in the world. Fundamentally, I think the best minds from around the world are needed to solve the difficulty of aligning advanced AI systems, so I really hope Chinese AI researchers can make great contributions here.
A very different architecture is required to make AGI safer
**Zhang Hongjiang:** A follow-up question on GPT-4 and AI safety: is it possible that we will need to change the whole infrastructure or architecture of AGI models to make them safer and easier to inspect?
Sam Altman: It's entirely possible that we do need some very different architectures, both from a functional perspective and from a security perspective.
I think we're going to be able to make good progress right now in explaining the capabilities of our various models and getting them to better explain to us what they're doing and why. But yes, I wouldn't be surprised if there's another giant leap after the Transformer. We've already changed the architecture quite a bit since the original Transformer.
The possibility of OpenAI open-sourcing its models
Zhang Hongjiang: I understand that today's forum is about AI safety, but because people are very curious about OpenAI, I have many questions about OpenAI, not just about AI safety. Here is an audience question: does OpenAI have any plan to re-open-source its models, as it did before version 3.0? I also think open source is good for AI safety.
Sam Altman: Some of our models are open source and some are not, but as time goes on, I think you should expect us to continue to open source more models in the future. I don't have a specific model or timeline, but it's something we're discussing right now.
Zhang Hongjiang: BAAI open-sources all of its work, including the models and algorithms themselves. We believe there is a need for this kind of sharing and giving back, so that those who use these tools feel they are the ones in control. Do you have similar ideas, or have these topics been discussed among your peers or colleagues at OpenAI?
Sam Altman: Yes, I think open source does have an important role in a way.
A lot of new open-source models have also emerged recently. I think the API model also plays an important role: it provides us with additional safety controls. You can block certain uses, you can block certain types of fine-tuning, and if something isn't working, you can roll it back. At the scale of current models, I'm not too worried about that. But as models become as powerful as we expect them to be, if we're right about that, I think open-sourcing everything might not be the best path, although sometimes it will be the right one. I think we just have to balance this carefully.
We will open-source more large models in the future, but there is no specific model or timetable.
The next step for AGI? Will we see GPT-5 soon?
Zhang Hongjiang: As a researcher, I am also curious, what is the next direction of AGI research? In terms of large models, large language models, will we see GPT-5 soon? Is the next frontier in embodied models? Is autonomous robotics an area that OpenAI is or plans to explore?
Sam Altman: I'm also curious about what's going to happen next, one of my favorite things about doing this work is that there's a lot of excitement and surprise at the cutting edge of research. We don't have the answers yet, so we're exploring many possible new paradigms. Of course, at some point, we will try to do a GPT-5 model, but not anytime soon. We don't know when exactly. We've been working on robotics since the very beginning of OpenAI, and we're very interested in it, but we've had some difficulties. I hope one day we can go back to this field.
Zhang Hongjiang: Sounds great. You also mentioned in your presentation how you use GPT-4 to explain how GPT-2 works, making the model more secure. Is this approach scalable? Is this direction OpenAI will continue to advance in the future?
Sam Altman: We will continue to push in this direction.
Zhang Hongjiang: Do you think this method can be applied to biological neurons? The reason I ask is that some biologists and neuroscientists want to borrow this method to study and explore how human neurons work in their own fields.
Sam Altman: It's much easier to see what's going on in an artificial neuron than in a biological neuron. So I think this approach is valid for artificial neural networks, and I think there is a way to use more powerful models to help us understand other models. But I'm not quite sure how you would apply this approach to the human brain.
Is controlling the number of models feasible?
**Zhang Hongjiang:** OK, thank you. Now that we've talked about AI safety and AGI control, one of the questions we've been discussing is whether it would be safer if there were only three models in the world. It's like nuclear control: you don't want nuclear weapons to proliferate, so we have treaties that try to control the number of countries that can obtain the technology. So is controlling the number of models a feasible direction?
Sam Altman: I think there are different opinions on whether it is safer to have a small number of models or a large number of models in the world. I think what matters more is whether we have a system in which any sufficiently powerful model is adequately tested for safety, and a framework in which anyone who creates a sufficiently powerful model has both the resources and the responsibility to ensure that what they create is safe and aligned.
Zhang Hongjiang: At yesterday's session, Professor Max Tegmark of MIT, from the Future of Life Institute, mentioned a possible approach, similar to the way we control drug development: when scientists or companies develop new drugs, they cannot market them directly; they have to go through a testing process. Is this something we can learn from?
Sam Altman: I definitely think we can learn a lot from the licensing and testing frameworks that have been developed in different industries. But I think fundamentally we've got something that can work.
Zhang Hongjiang: Thank you very much, Sam, and thank you for taking the time to attend this meeting, albeit online. I'm sure there are many more questions, but given the time, we have to stop here. I hope that next time you have the opportunity to come to China and visit Beijing, we can have a more in-depth discussion. Thank you very much.