
Ronaldinho says STAR10, a token named after his soccer jersey number, is his new official token, set to launch on the BNB Chain. He has been involved in several crypto ventures before and has even faced questions about his role in an alleged pyramid scheme that used his image for promotion.
Source: Bitcoin.com
Disclaimer: The opinion expressed here is not investment advice – it is provided for informational purposes only. It does not necessarily reflect the opinion of BitMaden. Every investment and all trading involves risk, so you should always perform your own research prior to making decisions. We do not recommend investing money you cannot afford to lose.
Breaking: President Trump signs executive order to create strategic Bitcoin (BTC) reserve

The President of the United States, Donald Trump, has officially made the first move.
Source: Bitcoin.com

Alarming OpenAI AI Safety Claims: Ex-Lead Criticizes GPT-2 ‘Rewriting’
In the fast-paced world of cryptocurrency and blockchain, the undercurrent of artificial intelligence is becoming increasingly significant. As AI technologies like those developed by OpenAI mature, their potential impact on digital currencies and decentralized systems is undeniable. However, recent controversies surrounding AI safety practices at leading AI firms are raising eyebrows and sparking debate within the tech community and beyond. A prominent voice in this debate is Miles Brundage, former policy lead at OpenAI, who is now publicly challenging the company's narrative on AI history and its approach to GPT-2 deployment. Let's delve into the heart of this controversy and understand what it means for the future of AI development and its intersection with the crypto world.

Why Is OpenAI Accused of 'Rewriting' Its AI History?

The core of the issue lies in a recently published document by OpenAI outlining its current philosophy on AI safety and alignment. In this document, OpenAI suggests a shift in perspective regarding the development of Artificial General Intelligence (AGI). It now frames AGI as part of a "continuous path" of AI advancement, advocating for iterative deployment and learning from current AI systems. This contrasts with what it describes as a "discontinuous world" approach, which it claims it adopted for its earlier model, GPT-2. According to OpenAI, in this discontinuous world, caution was paramount, leading it to treat systems like GPT-2 with "outsized caution."

Here's a breakdown of OpenAI's stated positions:
- Discontinuous World (GPT-2 Era): Characterized by treating early AI systems with significant caution due to perceived risks, even if those risks seemed disproportionate in hindsight.
- Continuous World (Current Philosophy): Views AGI development as a gradual progression, emphasizing learning and iterative improvement through deployment of current systems to ensure the safety of future, more advanced AI.
However, this narrative is being challenged by Miles Brundage, OpenAI's former policy research head, who argues that OpenAI is misrepresenting the context of GPT-2's release and its alignment with the company's current "iterative deployment" philosophy.

Miles Brundage's Powerful Criticism: What's the Real Story of GPT-2 and AI Safety?

Miles Brundage, who was deeply involved in OpenAI's GPT-2 release strategy, took to social media platform X to voice his concerns. He contends that the cautious, incremental release of GPT-2 was entirely consistent with, and even foreshadowed, OpenAI's present-day iterative deployment approach. Brundage highlights that:
- GPT-2's Incremental Release: The model was not fully released immediately. Instead, OpenAI opted for a phased rollout, sharing lessons and insights at each stage.
- Expert Support for Caution: Many security experts at the time applauded OpenAI's cautious approach to GPT-2, recognizing the potential risks associated with such powerful language models.

Brundage firmly believes that the caution exercised during the GPT-2 release was justified given the information available at the time. He questions OpenAI's current characterization of that period as belonging to a "discontinuous world" approach, arguing that it was, in fact, an early example of the iterative deployment strategy the company now champions.

The GPT-2 Context: Why Was There So Much AI Safety Concern?

To understand Brundage's perspective, it's crucial to remember the environment surrounding GPT-2 in 2019. GPT-2 was a significant leap forward in AI text generation. It could perform tasks previously thought to be uniquely human, such as:
- Answering questions on a wide range of topics.
- Summarizing lengthy articles.
- Generating human-quality text that was, at times, indistinguishable from human writing.

Despite seeming less sophisticated by today's standards, GPT-2 was groundbreaking at the time.
OpenAI, acknowledging the potential for misuse, initially chose not to release the full source code, citing risks of malicious applications like generating fake news or spam. This decision, while debated, underscores the genuine AI safety concerns prevalent at the time.

What Are Brundage's Fears About OpenAI's Current Stance on AI Safety?

Miles Brundage worries that OpenAI's recent document is designed to shift the burden of proof regarding AI safety concerns. He fears that OpenAI is attempting to create an environment where:
- Concerns Are Dismissed as Alarmist: Legitimate worries about AI safety might be downplayed or labeled as exaggerated.
- A High Evidence Threshold Applies: Action on AI safety would only be taken when there is "overwhelming evidence of imminent dangers."

Brundage argues that this mentality is "very dangerous," especially as AI systems become increasingly advanced and potentially impactful. He questions OpenAI's motives for "poo-pooing caution" and wonders if it signals a shift toward prioritizing rapid product releases over comprehensive AI safety measures.

Are Competitive Pressures Affecting OpenAI's AI Safety Priorities?

There's a growing narrative that competitive pressures in the AI industry might be influencing OpenAI's approach to AI safety. OpenAI has faced accusations in the past of prioritizing "shiny products" and rushing releases to outpace competitors. This pressure has only intensified with the rise of rivals like Chinese AI lab DeepSeek, whose R1 model has reportedly matched OpenAI's o1 model on certain benchmarks. Adding to the pressure, OpenAI CEO Sam Altman has admitted that DeepSeek has narrowed OpenAI's technological lead. Furthermore, reports suggest OpenAI is facing significant financial losses, projected to potentially triple to $14 billion annually by 2026. In this context, a faster product release cycle could offer short-term financial benefits but might compromise long-term AI safety considerations.
The Bigger AI Safety Debate: Balancing Innovation and Responsibility

The disagreement between Miles Brundage and OpenAI highlights a fundamental tension in the AI field: balancing rapid innovation with responsible development and deployment. Experts like Brundage are raising critical questions about whether the current race to dominate the AI market is overshadowing crucial AI safety protocols. The concerns are not just about technical safeguards but also about the broader ethical and societal implications of increasingly powerful AI systems.

As the crypto and blockchain space increasingly integrates AI, these AI safety debates become even more relevant. The security and reliability of decentralized systems could be profoundly affected by the underlying AI technologies they utilize. Therefore, understanding and engaging with the discussions around AI history, AI safety, and responsible development is crucial for anyone involved in the future of digital currencies and beyond.

Conclusion: A Critical Juncture for AI Safety and OpenAI's Path Forward

The critique from Miles Brundage serves as a stark reminder that the path to advanced AI is not just about technological breakthroughs but also about navigating complex ethical and AI safety landscapes. As OpenAI and other AI leaders continue to shape the future of this transformative technology, the debate over AI history, deployment strategies, and the prioritization of AI safety will remain central. The crypto community, with its inherent focus on security and decentralization, has a vested interest in ensuring these discussions are robust and lead to responsible AI innovation.

To learn more about the latest AI safety trends, explore our article on key developments shaping AI features.

Source: Bitcoin.com