🗞 OpenAI Considers Responsibly Allowing AI-Generated Explicit Content Amid Concerns

Key AI Developments from the Last 24 Hours

Hello, enthusiasts! 🌟 Digitize Dispatch brings you the latest, most impactful AI news, cutting through the noise. No filler, just the updates driving the future of AI.

🔎 The Latest on the AI Frontier:

  • OpenAI Considers Responsibly Allowing AI-Generated Explicit Content 🚨

  • OpenAI's Preferred Publishers Program: Partnering with News Media 📰

  • Elon Musk's xAI Startup Nears $18 Billion Valuation 💰

  • AlphaFold 3: Revolutionizing Molecular Structure Prediction 🧬

  • TikTok Leads in Labeling AI-Generated Content 🎥

  • Microsoft Deploys Secretive AI Platform for U.S. Intelligence Agencies 🕵️

  • Stack Overflow Faces Backlash Over OpenAI Partnership and User Bans 🚫

  • AI Ethicists Call for Regulation of "Deadbots" 💀

🚨 OpenAI, the company behind ChatGPT, is exploring ways to "responsibly" allow users to create sexually graphic content using its AI tools, according to a document outlining the future use of its technology. Read More

  • OpenAI's rules currently ban most sexually explicit or suggestive content, but the company is now re-evaluating that blanket prohibition and opening a conversation about whether erotic text and nude images should always be off-limits in its AI products.

  • The debate comes amid concerns about the rise of "nudify" apps and deepfake porn, which researchers worry could be used to harass, blackmail, or embarrass victims, particularly among teens.

  • While OpenAI stresses that any changes would serve content only in an age-appropriate context and block non-consensual sexual images and videos, experts caution that the potential harms of relaxing the NSFW policy could outweigh the benefits for educational and artistic uses.

📰 OpenAI has been pitching partnership opportunities to news publishers through its Preferred Publishers Program (PPP), offering select benefits such as priority placement, richer brand expression, and licensing payments. Read More

  • The PPP is available only to "select, high-quality editorial partners" and aims to help ChatGPT users more easily discover and engage with publishers' brands and content.

  • Financial incentives for participating publishers include a guaranteed licensing payment for allowing OpenAI to access their data and a variable payment based on the number of users engaging with linked or displayed content.

  • In return, OpenAI gains the ability to train on and display a publisher's content with attribution and links, and can announce the publisher as a preferred partner. Publishers, in turn, may see larger payments if more users engage with their links as the browse feature becomes more widely used.

💰 Elon Musk's AI startup, xAI, is expected to reach an $18 billion valuation after closing its current funding round, which could raise $6 billion and includes Sequoia Capital as an investor. Read More

  • xAI's pitch deck highlights Musk's other businesses, Tesla and SpaceX, and suggests the possibility of training the AI using data from Musk's social media platform, X (formerly Twitter).

  • Musk announced xAI in July 2023, assembling a team of AI talent from DeepMind, OpenAI, Google Research, Microsoft Research, and other companies with expertise in large language models (LLMs) and machine learning.

  • In March, Musk open-sourced Grok-1, the model behind xAI's rival to ChatGPT, after filing a lawsuit against OpenAI and its CEO, Sam Altman, alleging a betrayal of the company's founding commitment to benefiting humanity over profit.

🧬 AlphaFold 3, an AI model developed by Google DeepMind and Isomorphic Labs, can predict the structure and interactions of all life's molecules with unprecedented accuracy, doubling prediction accuracy for some important categories of interaction compared to existing methods. Read More

  • AlphaFold 3 models large biomolecules such as proteins, DNA, and RNA, as well as small molecules like ligands and the chemical modifications that control cellular function and disease.

  • With a 50% improvement in predicting drug-like interactions over traditional methods, AlphaFold 3 is the first AI system to surpass physics-based tools for biomolecular structure prediction.

  • The free and easy-to-use AlphaFold Server allows scientists to harness AlphaFold 3's power to model complex molecular structures, accelerating workflows and enabling further innovation.

🎥 TikTok has become the first video-sharing platform to automatically label AI-generated content using Content Credentials technology, aiming to combat misinformation and help users distinguish between fact and fiction. Read More

  • Content Credentials, described as a "nutrition label for content," make it possible to trace the origin of different types of media, including how, where, and by whom a piece of content was created and what edits were made.

  • OpenAI, Adobe, and Microsoft are among the companies already using Content Credentials to embed metadata into visual content created using their AI platforms, with plans to expand to video content in the future.

  • TikTok's global rollout of AI content labels begins today, with plans to attach Content Credentials to content in the coming months, allowing other platforms to read the metadata when content is downloaded.

πŸ•΅οΈ Microsoft has deployed a secretive generative AI platform for U.S. intelligence agencies, providing a tool that allows spies to safely analyze sensitive data using AI models without connecting to the internet. Read More

  • The AI tool, based on GPT-4, is the first major large language model (LLM) fully separated from the internet, deployed in an "air-gapped" cloud environment accessible only by the U.S. government.

  • The platform is structured to read files without learning from them or from the broader internet, aiming to prevent sensitive national security information from leaking into publicly accessible models.

  • 🔒 With potential access for about 10,000 members of the intelligence community, Microsoft's AI tool is currently in a testing and accreditation phase before broader use by agencies like the CIA.

🚫 Stack Overflow, a popular forum for programmers and developers, has faced backlash from users after announcing a partnership with OpenAI to use the site's content to train ChatGPT, leading to mass bans for users who deleted or edited their answers in protest. Read More

  • Many users are removing or editing their questions and answers to prevent them from being used to train AI, with one user, Ben, reporting that his account was suspended for 7 days after changing his highest-rated answers to a protest message.

  • Users are questioning why ChatGPT cannot simply cite the sources of the answers it provides; doing so would reveal how the AI models are trained, which may not align with the promise of a super-smart generative AI assistant.

  • Stack Overflow's rapid policy change, from prohibiting AI-generated content to embracing it as a "big opportunity," has further outraged users who disagree with having their content scraped by ChatGPT, even though the site's Terms of Service grant it an irrevocable license to user-contributed content.

💀 AI ethicists from the University of Cambridge are calling for urgent regulation of "deadbots," digital recreations of deceased individuals, warning of potential psychological harm and disrespect to the rights of the deceased. Read More

  • Rapid advancements in generative AI have made it possible for nearly anyone to "revive" a deceased loved one as a chatbot, but this raises ethical concerns, particularly where the financial motives of digital afterlife services may encroach on the dignity of the deceased.

  • The use of "deadbots" by children to cope with loss may be particularly damaging, as there is little evidence that such an approach is beneficial, and it could disrupt the normal mourning process.

  • To preserve the dignity of the dead and the wellbeing of the living, researchers suggest best practices, such as sensitively "retiring" deadbots, limiting interactive features to adults, and ensuring transparency about the limitations of artificial systems, which may require regulation to enforce.

That's a wrap for today's AI news! Stay tuned for more updates, and remember, with AI's rapid evolution, the future is not just about technology, it's about how we adapt and innovate. Until next time! 🚀💡

Have any feedback? Send us an email