From 202 items, 40 important stories were selected:
- Ghostty Terminal Emulator Leaves GitHub ⭐️ 8.0/10
- OpenAI Models Land on AWS Bedrock ⭐️ 8.0/10
- GitHub Enterprise Server RCE Vulnerability CVE-2026-3854 ⭐️ 8.0/10
- Claude.ai and API Major Outage ⭐️ 8.0/10
- NVIDIA Releases Nemotron 3 Nano Omni Open Multimodal AI Model ⭐️ 8.0/10
- Google Expands Pentagon AI Access After Anthropic Refusal ⭐️ 8.0/10
- Cordon: MCP Security Gateway with Human-in-the-Loop Approvals ⭐️ 8.0/10
- Qwen Open-Sources FlashQLA: 2-3x Faster Linear Attention Kernel ⭐️ 8.0/10
- Hugging Face Transformers v5.7.0 Adds Laguna MoE and DEIMv2 Models ⭐️ 7.0/10
- llama.cpp b8964 Fixes Reasoning Budget Re-Arm Bug ⭐️ 7.0/10
- Reflections on Pre-GitHub Version Control Era ⭐️ 7.0/10
- Google’s 2026 Android Sideloading Restrictions Spark Controversy ⭐️ 7.0/10
- Who Owns the Code Claude Code Wrote ⭐️ 7.0/10
- Warp Terminal Emulator Now Open-Source ⭐️ 7.0/10
- Localsend: Open-Source Cross-Platform AirDrop Alternative ⭐️ 7.0/10
- Waymo Expands to Portland Amid Transit Budget Crisis ⭐️ 7.0/10
- Into the Omniverse: Manufacturing’s Simulation-First Era Has Arrived ⭐️ 7.0/10
- Anthropic Connects Claude to Photoshop, Blender, and Ableton ⭐️ 7.0/10
- Musk and Altman Go to Court: The OpenAI Trial Begins ⭐️ 7.0/10
- Google-Pentagon AI Deal Allows ‘Any Lawful’ Military Use ⭐️ 7.0/10
- Talkie-1930: 13B LLM Trained on Pre-1931 English Text ⭐️ 7.0/10
- FIDO Alliance, Google, Mastercard Develop AI Agent Security Standards ⭐️ 7.0/10
- Bloomberg Terminal Getting AI Chatbot Overhaul ⭐️ 7.0/10
- Talkie: 13B Vintage Language Model from 1930 ⭐️ 7.0/10
- Neural Ledger System: Persistent AI Context Across Sessions ⭐️ 7.0/10
- HIC: Same-Ring Isolation OS Kernel Achieves 4ns IPC ⭐️ 7.0/10
- Nvidia Executive: AI Costs More Than Human Workers ⭐️ 7.0/10
- OpenAI Models, Codex, and Managed Agents Come to AWS ⭐️ 7.0/10
- Anthropic Joins Blender Development Fund as Corporate Patron ⭐️ 7.0/10
- Limits of Self-Improving LLMs: No Singularity Without Symbolic Synthesis ⭐️ 7.0/10
- Using Snowflake Cortex Agents with Structured Data ⭐️ 7.0/10
- Grafana Refactors Loki with Kafka, Releases CLI Tool for AI Agents ⭐️ 7.0/10
- ClickHouse Full-Text Search on Object Storage ⭐️ 7.0/10
- Cloudflare Sandboxes GA: Persistent Isolation for AI Agents ⭐️ 7.0/10
- China Adds 38 New Undergraduate Majors Including Embodied AI ⭐️ 7.0/10
- CAC Cracks Down on CapCut, Dreamina AI Over AI Content Labeling Failures ⭐️ 7.0/10
- Google Signs Classified AI Deal with Pentagon Worth $200M ⭐️ 7.0/10
- LiteLLM SQL Injection Allows Unauthenticated API Key Theft ⭐️ 7.0/10
- Warp Open-Sources Client Code Base for AI-Powered Terminal Development ⭐️ 7.0/10
- NVIDIA Releases Nemotron 3 Super: 120B Open-Weight MoE Model for Multi-Agent AI ⭐️ 7.0/10
Ghostty Terminal Emulator Leaves GitHub ⭐️ 8.0/10
Mitchell Hashimoto announced that Ghostty, the terminal emulator he created, is leaving GitHub. In an emotional blog post, he reflected on his complicated relationship with the platform and the changes he has observed since GitHub was acquired by Microsoft. The departure has sparked a lively debate about GitHub’s deterioration as a platform and the ethics of proprietary code hosting, highlighting broader themes of platform loyalty versus ethical concerns in open source development and resonating with many developers who have noticed GitHub’s declining service quality. Ghostty is a fast, feature-rich, cross-platform terminal emulator that uses platform-native UI and GPU acceleration; it has gained significant traction in the developer community and was hosted on GitHub until this departure.
hackernews · WadeGrimridge · Apr 28, 19:44
Background: Ghostty is a terminal emulator created by Mitchell Hashimoto, also known for developer tools like Packer and Terraform. GitHub is the largest code hosting platform, acquired by Microsoft in 2018. The discussion about GitHub’s decline includes concerns about resource allocation to Copilot instead of core services, organizational changes, and the unofficial status page tracking ongoing issues.
Discussion: The community response mixes empathy for Hashimoto’s genuine emotions with criticism about his late stance on proprietary software concerns - some compare it to a Richard Stallman-esque attitude. Others joke about hiring Hashimoto as GitHub’s CEO to turn the platform around, acknowledging his strong convictions and vision. Overall, there’s widespread agreement that GitHub is facing serious organizational issues.
Tags: #open-source, #GitHub, #dev-tools, #terminal-emulator, #platform-governance
OpenAI Models Land on AWS Bedrock ⭐️ 8.0/10
OpenAI has partnered with AWS to bring their GPT models to Amazon Bedrock, marking a significant diversification from their exclusive partnership with Microsoft Azure. The models are now available through Amazon Bedrock’s fully managed service. This partnership represents a major competitive shift in the enterprise AI cloud market. Organizations that have been hesitant to adopt OpenAI due to privacy concerns or lack of trusted deployment options can now access GPT models through AWS, a cloud provider they already trust and have existing contracts with. This could significantly expand OpenAI’s reach into regulated industries like finance and healthcare. Amazon Bedrock offers a serverless experience for quick deployment, allowing enterprises to experiment with foundation models without managing infrastructure. The service already hosts models from Anthropic, Cohere, and AI21 Labs, and now adds OpenAI to its portfolio. However, models on different inference platforms may perform differently due to quantization, custom silicon, batching, and other inference optimizations.
hackernews · Hacker News - OpenAI / Anthropic / Gemini / DeepSeek · Apr 28, 19:24
Background: Amazon Bedrock is a fully managed service offered by AWS that provides enterprises with high-performing foundation models from leading AI companies. A foundation model is an AI model trained on vast datasets that can be applied across a wide range of use cases, serving as the building block for generative AI applications. The serverless nature of Bedrock allows organizations to quickly experiment with different models without worrying about infrastructure management.
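Bedrock exposes models behind a uniform runtime API, which is what makes swapping providers low-friction for enterprises. The sketch below shows the shape of a `bedrock-runtime` Converse call with boto3; the model identifier is a placeholder I made up for illustration, since the actual IDs AWS assigns to OpenAI models vary by region and should be checked in the Bedrock console.

```python
# Sketch of calling a model through Amazon Bedrock's Converse API with boto3.
# MODEL_ID below is hypothetical -- look up the real identifier for OpenAI
# models in your region before using this.
import json

MODEL_ID = "openai.gpt-5"  # placeholder, not a confirmed Bedrock model ID

def build_converse_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build the keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }

if __name__ == "__main__":
    request = build_converse_request("Summarize our Q3 incident report.")
    print(json.dumps(request, indent=2))
    # With AWS credentials configured, the call itself would be:
    #   import boto3
    #   client = boto3.client("bedrock-runtime", region_name="us-east-1")
    #   response = client.converse(**request)
    #   print(response["output"]["message"]["content"][0]["text"])
```

Because the request shape is the same for every Bedrock-hosted model, switching from an Anthropic model to an OpenAI one is, in principle, a one-line `modelId` change, which is exactly the multi-vendor flexibility the discussion highlights.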
Discussion: The discussion reveals that many privacy-conscious organizations had adopted Anthropic’s models through Bedrock earlier because they trusted AWS as an intermediary, while OpenAI was banned in some enterprises due to trust concerns. Commenters note that OpenAI appears to be in a catch-up position and wonder if models on different inference platforms might produce non-deterministic results, potentially affecting development consistency.
Tags: #openai, #aws, #bedrock, #cloud-ai, #partnership
GitHub Enterprise Server RCE Vulnerability CVE-2026-3854 ⭐️ 8.0/10
Wiz discovered a critical Remote Code Execution (RCE) vulnerability CVE-2026-3854 in GitHub Enterprise Server using their AI-augmented reverse engineering methodology, identifying a severe flaw in widely-used enterprise code hosting infrastructure. This vulnerability represents a watershed moment in AI-powered security research, demonstrating how LLM agents can dramatically accelerate vulnerability discovery. The alarmingly slow patching rate—88% of instances still vulnerable after 7 weeks—raises serious concerns about enterprise security postures and the practical challenges of maintaining on-premises software. The vulnerability affects GitHub Enterprise Server versions prior to 3.19.3. The patch was released on March 10, 2026, yet Wiz’s data shows 88% of on-premises GHES instances remain unpatched as of late April 2026—7 weeks after the fix became available.
hackernews · bo0tzz · Apr 28, 16:15
Background: GitHub Enterprise Server (GHES) is an on-premises version of GitHub’s code hosting platform designed for enterprises requiring data sovereignty or custom deployment options. CVE (Common Vulnerabilities and Exposures) is a database of publicly disclosed security flaws. Remote Code Execution (RCE) is the most severe class of vulnerability as it allows attackers to run arbitrary commands on affected systems. AI-augmented security research uses large language models trained on code to accelerate understanding of complex system internals and vulnerability discovery.
Discussion: Security researchers praise Wiz’s AI-augmented approach, arguing it showcases a core strength of LLMs in code analysis: rapidly understanding complex system internals that would take human analysts far longer. Some commenters find it alarming that 88% of customers remain unpatched after seven weeks, while others question whether GitHub’s alternatives would fare any better given the complexity of enterprise software security.
Tags: #security-vulnerability, #remote-code-execution, #github-enterprise, #ai-security-research, #cve-2026-3854
Claude.ai and API Major Outage ⭐️ 8.0/10
Anthropic’s Claude.ai website and API experienced a significant multi-hour outage, leaving users unable to access the AI assistant or make API calls. The incident triggered widespread disruption for both individual developers and enterprise customers. This outage is significant because enterprise users paying $200,000+ per month are experiencing only “one nine” (~90%) reliability over 90 days. The combination of frequent outages and poor support response has executive teams furious, raising serious concerns about Claude’s viability as a primary coding tool. According to community reports, the Claude service has struggled with reliability issues for months. Long-term users report that the model frequently produces subtly wrong, incomplete, or unworkable outputs, requiring constant relearning of optimal prompting techniques.
hackernews · shorsher · Apr 28, 18:01
Background: Anthropic Claude is a family of AI assistants developed by Anthropic, a prominent AI safety company backed by major tech companies including Amazon and Google. Claude.ai is the web interface for accessing Claude, while the API allows developers to integrate Claude into their applications.
Discussion: Enterprise users express strong dissatisfaction, with one $200K/month customer reporting their executive team is furious about the combination of outages and poor support. Some users recommend adopting multi-model strategies (using Anthropic, Codex, and Gemini together) to mitigate vendor lock-in risks when any single LLM goes down.
Tags: #anthropic, #claude, #llm-services, #infrastructure, #reliability, #developer-tools
NVIDIA Releases Nemotron 3 Nano Omni Open Multimodal AI Model ⭐️ 8.0/10
NVIDIA today unveiled Nemotron 3 Nano Omni, an open multimodal AI model that unifies vision, speech (audio), and language processing into a single system for AI agents, delivering up to 9x greater efficiency compared to systems using separate models. This model addresses a critical bottleneck in AI agent systems, where separate models for vision, speech and language create delays and context loss during data transfer. By unifying these capabilities, developers can build faster, more responsive AI agents that reason across screens, documents, audio, video and text within a single perception-to-action loop. Nemotron 3 Nano Omni is released as an open model, enabling developers to freely access and integrate it into their applications. The 9x efficiency gain is achieved by eliminating the overhead of routing data between separate specialized models and maintaining a unified context throughout the processing pipeline.
rss · NVIDIA Blog · Apr 28, 16:00
Background: AI agents are systems that use large language models (LLMs) as their core to autonomously reason, plan, and take actions based on user prompts. Traditional AI agent systems often juggle separate models for different modalities - one for image recognition, another for speech processing, and a third for language understanding - which creates latency and loses context as data passes between models. The agent loop concept refers to the continuous cycle of perception, reasoning, action and feedback that enables AI systems to adapt their decisions in real time.
Tags: #multimodal AI, #NVIDIA, #AI agents, #LLMs, #machine learning
Google Expands Pentagon AI Access After Anthropic Refusal ⭐️ 8.0/10
Google has signed a new contract with the Pentagon to expand the Department of Defense’s access to its AI technology, following Anthropic’s refusal to allow DoD use of its AI for domestic mass surveillance and autonomous weapons applications. This development raises critical questions about AI ethics and corporate responsibility in the military AI sector. It highlights the growing tension between tech companies’ moral obligations and government defense demands, potentially setting a precedent for which companies comply with military AI requests and which refuse. The contract follows Anthropic’s explicit refusal to allow DoD use of its AI for domestic mass surveillance and autonomous weapons, creating a market opportunity for Google. This marks a significant shift in the competitive landscape of military AI contracts among major tech companies.
rss · TechCrunch AI · Apr 28, 18:15
Background: This news occurs within the broader context of ongoing debates about AI ethics in military applications. Major tech companies have faced increasing scrutiny over whether their AI technologies should be used for surveillance and autonomous weapons systems. Google has previously faced employee protests related to Project Maven, a military AI project, while Anthropic has taken a more restrictive stance on military AI applications.
Tags: #AI ethics, #military AI, #Google, #Anthropic, #Pentagon
Cordon: MCP Security Gateway with Human-in-the-Loop Approvals ⭐️ 8.0/10
Cordon is an open-source MCP gateway that acts as a transparent proxy between LLM clients and MCP servers, adding granular security policies (allow, block, approve, read-only, log-only) to every tool call, with synchronous human-in-the-loop approvals that pause agents until users approve or deny specific tool calls in real-time. This matters because MCP currently has no security model—an agent either has full unrestricted access or is completely blocked, with no middle ground. Cordon provides the critical missing piece: a practical way to enforce human oversight on sensitive tool calls without disabling the agent entirely, preventing unbounded agent access to production systems, databases, and APIs. Cordon installs via npx cordon-cli init in about two minutes and auto-patches Claude Desktop config. It works with Claude Desktop, Claude Code, Cursor, Windsurf, and any stdio MCP client. The human-in-the-loop approval can be triggered via terminal prompt or Slack Block Kit message, showing exact arguments for decision-making. All decisions are logged, and the gateway runs locally as a fully offline CLI, with an optional hosted dashboard for centralized audit logs.
rss · Hacker News - Show HN · Apr 28, 22:39
Background: MCP (Model Context Protocol) is an open standard introduced by Anthropic in November 2024 that enables LLMs to integrate with external tools, databases, file systems, and APIs. Currently, MCP has no built-in security model—the specification assumes trust, leaving developers with an all-or-nothing choice where agents either have full admin access or no access at all.
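The gateway pattern described above is straightforward to picture: every tool call passes through a policy check, and an "approve" policy blocks the agent synchronously until a human decides. The sketch below is an illustration of that pattern in plain Python, not Cordon's actual code; the policy names mirror the ones the article lists.

```python
# Illustrative sketch (not Cordon's implementation) of an MCP-gateway policy
# gate: each tool call is checked against a per-tool policy before it is
# forwarded to the MCP server. "approve" pauses the agent for a human decision.
from typing import Callable

POLICIES = {
    "read_file": "allow",     # forwarded immediately
    "drop_table": "block",    # never forwarded
    "send_email": "approve",  # requires synchronous human approval
}

def gate_tool_call(tool: str, args: dict,
                   ask_human: Callable[[str, dict], bool]) -> str:
    """Return 'forwarded' or 'denied' for a tool call under the policy table."""
    policy = POLICIES.get(tool, "approve")  # unknown tools default to review
    if policy == "allow":
        return "forwarded"
    if policy == "block":
        return "denied"
    # "approve": show the human the exact tool name and arguments, then wait
    return "forwarded" if ask_human(tool, args) else "denied"
```

In a real gateway the `ask_human` callback would be a terminal prompt or a Slack message showing the exact arguments, as the article describes; here it is just a function parameter so the control flow is visible.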
Discussion: The HN discussion is minimal, with only one comment, indicating limited engagement. The approach seems well-received as a practical solution addressing a real security gap in the MCP ecosystem, though specific community concerns or counterarguments are not visible in the available comments.
Tags: #ai-safety, #mcp, #security, #llm-agents, #human-in-the-loop
Qwen Open-Sources FlashQLA: 2-3x Faster Linear Attention Kernel ⭐️ 8.0/10
The Qwen team has open-sourced FlashQLA, a high-performance linear attention kernel library built on TileLang and designed for Gated Delta Networks. Through operator fusion and algebraic optimization, it achieves a 2-3x forward and 2x backward speedup on NVIDIA Hopper GPUs. The release addresses key LLM efficiency challenges in long-context and edge-deployment scenarios: linear attention reduces the standard O(N²) attention complexity to O(N), which is essential for processing longer sequences at reasonable computational cost, and the 2-3x speedup directly improves pre-training efficiency and makes edge-side agent inference more practical. FlashQLA leverages the gated decay properties of Gated Delta Networks to enable automatic intra-chip context parallelism, specifically optimized for long sequences and small batch sizes. It employs warpgroup-specialized kernels to overlap computation and data transfer, improving SM utilization. The library is particularly suited to pre-training and edge agent inference scenarios.
telegram · zaihuapd · Apr 28, 14:11
Background: Linear attention is an efficient alternative to standard softmax attention in transformers, reducing computational complexity from O(N²) to O(N). Gated Delta Network (Gated DeltaNet) is a linear attention variant adopted by Qwen3-Next and the Kimi Linear family, using gating for adaptive memory control and delta updates for precise memory modifications. TileLang is a domain-specific language designed for streamlined development of high-performance GPU kernels. NVIDIA Hopper is a data-center GPU architecture with hardware features aimed at AI workloads.
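The O(N²) → O(N) reduction comes from associativity: instead of materializing the N×N score matrix (QKᵀ)V, linear attention keeps a running d×d state Kᵀ V and multiplies each query against it. The pure-Python sketch below shows only that core trick, with an identity feature map and no gating or decay; FlashQLA's actual kernels add Gated DeltaNet updates and GPU-level fusion on top of this idea.

```python
# Minimal sketch of the associativity trick behind linear attention.
# Softmax attention computes (Q K^T) V in O(N^2 d); causal linear attention
# keeps a running d x d state S = sum_{s<=t} outer(k_s, v_s) and computes
# each output as q_t @ S, for O(N d^2) total.

def linear_attention(Q, K, V):
    """Causal linear attention over lists of d-dim vectors (identity feature map)."""
    d = len(Q[0])
    S = [[0.0] * d for _ in range(d)]  # running K^T V state
    out = []
    for q, k, v in zip(Q, K, V):
        for i in range(d):             # S += outer(k, v)
            for j in range(d):
                S[i][j] += k[i] * v[j]
        # o_t = q_t @ S  ==  sum_{s<=t} (q_t . k_s) v_s
        out.append([sum(q[i] * S[i][j] for i in range(d)) for j in range(d)])
    return out
```

Because the state has a fixed d×d size regardless of sequence length, memory stays constant as the context grows, which is why linear attention is attractive for long-context and edge inference.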
Tags: #linear-attention, #llm-efficiency, #qwen, #nvidia-hopper, #tilelang, #inference-optimization
Hugging Face Transformers v5.7.0 Adds Laguna MoE and DEIMv2 Models ⭐️ 7.0/10
Hugging Face transformers v5.7.0 was released with two new model integrations: Poolside’s Laguna mixture-of-experts language model family and DEIMv2, a real-time object detection model extending DEIM with DINOv3 features. This release is significant as it brings two specialized models to the widely-used transformers library, enabling ML practitioners to access Poolside’s innovative MoE architecture with auxiliary-loss-free load balancing and DEIMv2’s state-of-the-art object detection capabilities ranging from ultra-lightweight to large models. Laguna introduces per-layer head counts allowing different decoder layers to have different query-head counts while sharing the same KV cache shape, and uses a sigmoid MoE router with learned per-expert bias for load balancing without auxiliary losses. DEIMv2 offers eight model sizes (X to Atto), with DEIMv2-X achieving 57.8 AP using only 50.3M parameters and DEIMv2-S being the first sub-10M model to exceed 50 AP on COCO.
github · vasqu · Apr 28, 18:32
Background: Mixture-of-Experts (MoE) is a technique that dynamically routes inputs to specialized expert sub-networks to improve model quality while managing computational cost. SwiGLU is an activation function combining Swish and Gated Linear Units that provides smoother transitions around zero for better optimization. DEIMv2 extends DETR (Detection Transformer) with improved matching and DINOv3 features for real-time object detection.
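The "sigmoid router with learned per-expert bias" idea can be sketched concisely: the bias participates in top-k expert *selection* (steering traffic toward underused experts) but not in the output *weights*, so load balancing needs no auxiliary loss term. The code below is an illustrative toy version of that mechanism, not Poolside's Laguna implementation.

```python
# Illustrative sketch (not Laguna's code) of a sigmoid MoE router with a
# learned per-expert bias: the bias shifts which experts are *selected*,
# while the mixing weights come from the unbiased sigmoid gates.
import math

def route(logits, bias, k=2):
    """Pick top-k experts by sigmoid(logit) + bias; weight by sigmoid(logit)."""
    scores = [1.0 / (1.0 + math.exp(-x)) for x in logits]  # sigmoid gates
    ranked = sorted(range(len(logits)),
                    key=lambda e: scores[e] + bias[e], reverse=True)
    chosen = ranked[:k]
    total = sum(scores[e] for e in chosen)
    return [(e, scores[e] / total) for e in chosen]  # normalized mix weights
```

A training loop would nudge the bias up for experts that receive too little traffic and down for overloaded ones, which is what makes the balancing "auxiliary-loss-free": the loss function itself never sees a balancing term.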
Tags: #huggingface, #transformers, #machine-learning, #mixture-of-experts, #NLP
llama.cpp b8964 Fixes Reasoning Budget Re-Arm Bug ⭐️ 7.0/10
llama.cpp release b8964 fixes a bug where the reasoning budget was not re-armed after entering DONE state, causing subsequent think blocks to run unbudgeted on multi-think models like Qwen3.6-27B-GGUF. This fix is significant for users leveraging extended reasoning in multi-block models, as it ensures proper budget allocation for each think block. Without the fix, models would consume unlimited tokens beyond the intended reasoning budget. The bug occurred because DONE state absorbs all tokens including a new start tag, preventing proper budget re-arming. The fix advances the start_matcher in the DONE branch and re-arms to COUNTING with a fresh budget on match. A regression test (test-reasoning-budget: test 6) has been added.
github · github-actions[bot] · Apr 28, 20:07
Background: Reasoning budget is a feature that limits how many “thinking” tokens a model can generate before it must produce its final answer. GGUF (GPT-Generated Unified Format) is a file format designed for storing and running LLMs efficiently on consumer hardware, created by the llama.cpp project. Qwen3 is a model series that supports interleaving multiple think blocks per response, requiring proper budget re-arming for each block.
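The bug and fix are easiest to see as a small state machine: COUNTING decrements the budget inside a think block, and once the budget is exhausted the tracker enters DONE. The flaw was that DONE absorbed every token, including the next start tag, so the budget never re-armed. The toy sketch below (not llama.cpp's actual code) shows the fixed behavior, where a start tag re-arms the budget from any state.

```python
# Toy sketch (not llama.cpp's code) of the reasoning-budget state machine.
# The fix: the start tag must re-arm COUNTING with a fresh budget even when
# the tracker is in DONE, otherwise later think blocks run unbudgeted.
def run_budget(tokens, budget):
    """Return (token, state) pairs under a per-think-block token budget."""
    state, left, trace = "IDLE", 0, []
    for tok in tokens:
        if tok == "<think>":       # start tag re-arms from IDLE *or* DONE
            state, left = "COUNTING", budget
        elif tok == "</think>":
            state = "IDLE"
        elif state == "COUNTING":
            left -= 1
            if left < 0:
                state = "DONE"     # budget exhausted for this block
        trace.append((tok, state))
    return trace
```

With the buggy version, the `elif state == "COUNTING"` logic would effectively win over start-tag matching once in DONE, so the second `<think>` block would never be counted; here the start-tag branch is checked first, mirroring the fix's re-arm on match.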
Tags: #llama.cpp, #GGUF, #bug-fix, #reasoning-budget, #Qwen3
Reflections on Pre-GitHub Version Control Era ⭐️ 7.0/10
A Hacker News discussion reflects on the pre-GitHub version control era, exploring how GitHub’s person-centric model reduced friction but may have atrophied collective archival skills, with debate over the merits of Fossil versus Git. The discussion highlights how GitHub fundamentally transformed open source development by making repository creation personal rather than project-bound, while raising concerns that centralized archival could weaken our collective ability to preserve software history. Key points include the liberation of creating repos under personal names versus the heavyweight project registration on SourceForge, Fossil’s integrated wiki/forum/tickets in a single file, Trac’s massive setup friction (Django still uses it after 20+ years), and concerns that centralized archival atrophies distributed preservation skills.
hackernews · mlex · Apr 28, 21:17
Background: Before GitHub, SourceForge required serious project registration including name reservation, CVS/SVN setup, websites, mailing lists, and issue tracking. Trac was popular but required significant effort to set up. Git was created in 2005 and GitHub launched in 2008, introducing a simpler person-centric model that made repository creation feel liberating.
Discussion: Commenters praise GitHub’s person-centric model as liberating compared to SourceForge’s heavy process, lament Git winning over Fossil despite Fossil’s useful integrated tools, raise concerns about centralized archival atrophying skills, and reminisce about Trac’s friction with Django as a lasting example.
Tags: #version-control, #git, #github, #software-history, #developer-tools
Google’s 2026 Android Sideloading Restrictions Spark Controversy ⭐️ 7.0/10
Google announced that starting September 2026, Android will block sideloaded apps from developers who haven’t registered with Google, verified their identity, paid a $25 fee, and submitted their signing keys. The company is offering an ‘advanced flow’ as an opt-out option. This represents a dramatic shift from Android’s traditional openness and places the platform closer to Apple’s walled-garden approach, affecting users who value running their own code on personal devices and developers who distribute apps outside Google Play. The verification process requires developers to provide government identification, upload app signing keys, and pay the $25 fee. There is an opt-out called the ‘advanced flow’ that allows users to sideload apps with additional security warnings. The restrictions apply to certified Android devices worldwide.
hackernews · doener · Apr 28, 15:21
Background: Sideloading refers to installing apps directly from APK files outside of Google Play, a core feature distinguishing Android from iOS. Android’s openness has been a key differentiator, allowing users to run custom apps and developers to distribute software without going through Google’s centralized approval. This policy change marks the most significant restriction on Android’s open ecosystem in its history.
Discussion: Comments show divided opinions: some users view this as a betrayal of Android’s founding promise and are considering switching to iOS, while others point out that the ‘advanced flow’ provides an opt-out. Critics argue this represents vendor lock-in unacceptable even on desktop PCs, while supporters note similar security measures exist elsewhere.
Tags: #android, #google, #open-source, #user-rights, #ecosystem
Who Owns the Code Claude Code Wrote ⭐️ 7.0/10
This article explores legal ownership of AI-generated code written by Claude Code (Anthropic’s AI coding assistant), examining the US Copyright Office’s January 2025 ruling that works predominantly generated by AI without meaningful human authorship are not eligible for copyright protection, and the Supreme Court’s March 2026 decision to decline hearing the Thaler appeal. This matters because it affects thousands of developers and companies using AI coding assistants, determining whether AI-generated code can be copyrighted, licensed under open source terms, or falls into public domain—directly impacting OSS licensing validity and intellectual property rights in software development. Key details include that the US Copyright Office requires human authorship for copyright eligibility (even detailed text prompts are insufficient), the Supreme Court’s certiorari denial does not conclusively settle the issue nationwide, and some community members argue AI agents build upon ‘stolen IP’ enabling potential copyright ‘washing’ in OSS.
hackernews · senaevren · Apr 28, 11:24
Background: The human authorship requirement is a cornerstone of US copyright law—copyright law assumes a human author who can sign documents, inherit property, and assign rights. Copyleft licensing is a form of open source licensing requiring that derivative works be distributed under the same license as the original. The Thaler case involved a researcher (Stephen Thaler) attempting to register AI-generated works without human authors.
Discussion: Comments reveal divided opinions: one user argues the Supreme Court denial doesn’t settle the issue (certiorari denial has many reasons unrelated to merits), another expresses concern about ‘copyright washing’ in OSS and suggests using strongest copyleft licensing, while a skeptical voice claims ownership will ultimately go to ‘people with money’ if the case reaches courts.
Tags: #AI-copyright, #legal-tech, #Claude-Code, #open-source-licensing, #IP-law
Warp Terminal Emulator Now Open-Source ⭐️ 7.0/10
Warp, a popular AI-powered terminal emulator, has announced it is now open-source. The VC-funded startup says opening the codebase will allow the community to contribute improvements and accelerate product development. The move marks a significant shift in how VC-backed developer tools compete: by going open source, Warp hopes to harness community contributions to outpace well-funded closed-source competitors while building a sustainable business on the strength of its product. Warp is a Rust-based terminal emulator with built-in AI features, including AI command completion and code editing capabilities. The company acknowledges it cannot compete on price or massively subsidize usage, so open-sourcing serves as a strategic acceleration mechanism.
hackernews · meetpateltech · Apr 28, 15:58
Background: Warp is an AI-powered terminal emulator that stands out from traditional terminals like iTerm2 and Windows Terminal by integrating AI-assisted features directly into the terminal experience. It was founded as a VC-funded startup and has been positioned as a next-generation developer tool.
Discussion: Community reactions are mixed. Some support the business strategy, appreciating that open-sourcing enables community participation. Others request a lightweight version without AI features, citing concerns about the 850 MB installation size and redundancy with existing tools like Claude Code. A developer noted building a similar Rust-based terminal with an embedded agent.
Tags: #open-source, #developer-tools, #terminal, #startups, #AI-tools
Localsend: Open-Source Cross-Platform AirDrop Alternative ⭐️ 7.0/10
Localsend is an open-source, cross-platform file sharing application that allows users to transfer files between devices on the same local network without requiring an internet connection or cloud service. This matters because it provides a free, privacy-focused alternative to Apple’s AirDrop that works across different operating systems, but its requirement for devices to be on the same local network is a significant limitation compared to AirDrop’s ability to create ad-hoc connections. Localsend requires all devices to be connected to the same local area network (LAN) to discover and transfer files. Alternative solutions like Sendme use the Iroh protocol library to enable true peer-to-peer file transfer over the internet with NAT traversal, without requiring a central server or existing LAN connection.
hackernews · bilsbie · Apr 28, 11:54
Background: AirDrop is Apple’s proprietary file sharing feature that works across Apple devices and can create temporary local networks for ad-hoc file transfer, making it useful even when outdoors or without a pre-existing Wi-Fi network. Localsend, as an open-source alternative, relies on traditional local network discovery which means devices must first be connected to the same Wi-Fi network. The Iroh protocol library provides modular peer-to-peer connectivity tools that can bypass these limitations through hole punching and relay fallbacks.
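The same-LAN requirement follows directly from how this class of tools discovers peers: they announce themselves with UDP broadcast or multicast datagrams, which routers do not forward beyond the local subnet. The sketch below illustrates that mechanism generically; the port number and message fields are invented for illustration and are not LocalSend's actual protocol.

```python
# Sketch of LAN peer discovery via UDP broadcast, the mechanism that ties
# tools like LocalSend to a shared network: broadcast datagrams never cross
# the local subnet, so peers on other networks simply never hear them.
# Port and message fields are illustrative, not LocalSend's real protocol.
import json
import socket

DISCOVERY_PORT = 54321  # hypothetical port for this sketch

def make_announcement(alias: str, fingerprint: str) -> bytes:
    """Serialize a discovery datagram announcing this device to the subnet."""
    return json.dumps({"alias": alias, "fingerprint": fingerprint,
                       "port": DISCOVERY_PORT}).encode()

def announce(alias: str, fingerprint: str) -> None:
    """Broadcast the announcement; only same-subnet peers can receive it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(make_announcement(alias, fingerprint),
                ("255.255.255.255", DISCOVERY_PORT))
    sock.close()
```

True internet-wide peer-to-peer tools like the Iroh-based Sendme avoid this limit by replacing subnet broadcast with NAT hole punching and relay fallbacks, which is exactly the distinction the discussion draws.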
Discussion: Users appreciate Localsend’s reliability compared to AirDrop but highlight a critical limitation: it requires devices to be on the same LAN, unlike AirDrop which can create its own network. One user recommends Sendme and AltSendme (built on Iroh) as true P2P solutions that work without geographic limits. Another suggests Pairdrop as a browser-based alternative that can connect beyond local networks using public rooms. The general sentiment is that existing solutions fail the “no existing Wi-Fi network required” criterion.
Tags: #open-source, #file-sharing, #AirDrop, #cross-platform, #local-network
Waymo Expands to Portland Amid Transit Budget Crisis ⭐️ 7.0/10
Waymo announces expansion to Portland, Oregon, entering a market where the local transit authority TriMet is facing a $300M budget shortfall with staff layoffs, reduced service frequency, and eliminated bus lines. This expansion highlights the contrasting trajectories of autonomous vehicles and public transit in the US — while traditional transit services face severe cuts due to funding crises, AV companies continue to expand into new markets. Waymo uses a LIDAR-first geofenced strategy, unlike Tesla’s vision-only approach to full self-driving. The expansion comes as Oregon Republicans have placed a payroll tax repeal on the ballot, which could further worsen TriMet’s budget situation.
hackernews · xnx · Apr 28, 18:08
Background: Waymo is an autonomous vehicle company owned by Alphabet (Google’s parent company). TriMet serves the Portland metropolitan area with buses, light rail (MAX), and streetcars. Portland has a diverse network of streetcars and trams concentrated in its downtown core.
Discussion: Comments discuss the irony of AV expansion during transit cuts, with users sharing positive Waymo experiences after trying it. Others dream of combining Rivian vehicles with Waymo tech, while some raise concerns about Waymo vehicles potentially getting stuck on light rail tracks in Portland.
Tags: #autonomous-vehicles, #waymo, #public-transit, #portland, #urban-mobility
Into the Omniverse: Manufacturing’s Simulation-First Era Has Arrived ⭐️ 7.0/10
NVIDIA announces manufacturing’s transition from traditional design-build-test cycles to simulation-first approaches through their Omniverse platform.
rss · NVIDIA Blog · Apr 28, 13:00
Tags: #manufacturing, #simulation, #NVIDIA Omniverse, #industrial tech, #digital twin
Anthropic Connects Claude to Photoshop, Blender, and Ableton ⭐️ 7.0/10
Anthropic released a set of connectors that allow Claude to integrate directly with popular creative software including Adobe Creative Cloud apps, Affinity, Blender, Ableton, and Autodesk, marking the company’s latest effort to break into the creative industry following its launch of Claude Design earlier this month. This integration significantly expands Claude’s utility for creative professionals by enabling AI assistance directly within mainstream desktop creative workflows—potentially transforming how designers, 3D artists, and musicians interact with tools like Photoshop, Blender, and Ableton. The connectors follow Anthropic’s launch of Claude Design on April 17, 2026, which was positioned as a collaborative workspace for creating visual designs, prototypes, slides, and one-pagers. The new connectors enable Claude to tap into the actual creative applications rather than just operating as a standalone design tool.
rss · The Verge AI · Apr 28, 16:49
Background: Claude Design is Anthropic’s dedicated AI design product launched in April 2026, aimed at enabling rapid prototyping and collaboration on visual work. The new creative connectors represent Anthropic’s expansion beyond standalone AI tools into direct integration with established creative software ecosystems that professionals use daily.
Tags: #AI integration, #Anthropic Claude, #creative tools, #product launch, #workflow automation
Musk and Altman Go to Court: The OpenAI Trial Begins ⭐️ 7.0/10
The high-profile legal trial between Elon Musk and OpenAI is officially underway. The case centers on disputes over the early development of AI, including claims about who deserves credit and financial compensation for founding OpenAI. This trial will likely expose previously confidential details about OpenAI’s founding and governance decisions. The outcome could set important precedents for how AI companies handle intellectual property, funding arrangements, and governance structures in the future. The trial is expected to last several weeks and involve testimony from key figures in the AI industry. The dispute involves claims about OpenAI’s transition from a nonprofit to a profit-focused model and allegations regarding broken promises about open-sourcing AI technology.
rss · The Verge AI · Apr 28, 14:47
Background: Elon Musk was one of the original co-founders of OpenAI in 2015 but left the organization in 2018. The lawsuit claims that OpenAI’s transformation into a profit-making company, particularly its partnership with Microsoft, violated its founding mission of developing AI for the benefit of humanity. Musk has alleged that the company’s current direction betrays the original agreement among its founders.
Tags: #AI Industry, #OpenAI, #Elon Musk, #Legal, #Tech Governance
Google-Pentagon AI Deal Allows ‘Any Lawful’ Military Use ⭐️ 7.0/10
Google has signed a classified agreement with the US Department of Defense that permits the Pentagon to use its AI models for “any lawful government purpose,” according to The Information. The deal was reported less than a day after Google employees demanded CEO Sundar Pichai block the Pentagon from using its AI technology. This deal is significant because it directly contradicts employee demands for ethical AI use and raises serious questions about corporate responsibility in military AI applications. The timing—reported immediately after employee protests—suggests Google prioritized the Pentagon contract over internal ethical concerns, potentially setting a precedent for tech-military collaborations despite workforce opposition. The agreement is classified, meaning specific details about which AI models are involved and their intended applications remain undisclosed. The broad language of “any lawful government purpose” could encompass a wide range of military and defense applications, from intelligence analysis to autonomous weapons systems.
rss · The Verge AI · Apr 28, 11:09
Background: This deal is not Google’s first involvement with the Pentagon; the company previously participated in Project Maven, an AI drone-targeting project that sparked employee protests in 2018 and led Google not to renew the contract. The current deal appears to expand AI collaboration with the military despite past employee concerns about AI being used for lethal purposes. Google employees have historically opposed military applications of their work, arguing that AI should not be weaponized.
Discussion: This news has generated significant controversy given the timing—reported immediately after employee protests demanding Google block the Pentagon from using its AI. The community is likely to view this as a disregard for employee ethical concerns and a prioritization of government contracts over responsible AI development.
Tags: #AI ethics, #Google, #Pentagon, #tech industry, #military AI
Talkie-1930: 13B LLM Trained on Pre-1931 English Text ⭐️ 7.0/10
Researchers led by Nick Levine, David Duvenaud, and Alec Radford have built Talkie-1930, a 13 billion parameter open-weight LLM trained exclusively on English text published before 1931. The model has no knowledge of modern events like the internet, smartphones, or World War II. This creates a unique ‘time capsule’ model that allows researchers to study historical reasoning and model generalization in controlled conditions, free from modern knowledge contamination. It provides a novel testbed for understanding how LLMs reason about history and generalize across temporal distribution shifts. The model is trained exclusively on pre-1931 English text corpora, creating a knowledge cutoff at 1931. As an open-weight model, researchers can freely access its weights for study and experimentation. The 13B parameter scale makes it substantial enough for meaningful research while remaining tractable for analysis.
rss · MarkTechPost · Apr 28, 02:24
Background: Knowledge cutoff is a fundamental concept in LLM development - it marks the point in time up to which a model’s training data extends. Modern LLMs typically have cutoffs between 2021-2024, meaning they lack knowledge of events after that date. Distribution shift between training and test data is a key challenge in ML research, where patterns learned on historical data may not apply to future scenarios. Talkie-1930 formalizes this by creating an extreme temporal cutoff that can be systematically studied.
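The enforced temporal cutoff described above can be sketched in a few lines. This is only an illustration of the idea; the actual Talkie-1930 data pipeline is not public, and the corpus records below are made up.

```python
# Hypothetical corpus records: (publication_year, text). Only documents
# published strictly before the cutoff year may enter the training set.
corpus = [
    (1859, "On the Origin of Species ..."),
    (1925, "The Great Gatsby ..."),
    (1951, "The Catcher in the Rye ..."),  # post-cutoff: must be excluded
]

CUTOFF_YEAR = 1931  # the model's knowledge ends here

def filter_by_cutoff(records, cutoff=CUTOFF_YEAR):
    """Keep only documents published strictly before the cutoff year."""
    return [(year, text) for year, text in records if year < cutoff]

train_set = filter_by_cutoff(corpus)
print(len(train_set))  # 2
```

The same filter, applied at corpus-construction time rather than as a post-hoc date check, is what guarantees the model never sees post-1930 knowledge.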
Tags: #large language models, #historical AI research, #model generalization, #train-test gap, #AI interpretability
FIDO Alliance, Google, Mastercard Develop AI Agent Security Standards ⭐️ 7.0/10
The FIDO Alliance has partnered with Google and Mastercard to develop security standards for AI agents that will soon make purchases on users’ behalf, addressing concerns about uncontrolled autonomous spending. This matters because AI agents are evolving from simple assistants to autonomous systems capable of making real purchases, creating new security risks for consumers and businesses. Without proper standards, users could face significant financial losses from AI agents spending beyond their intentions. The initiative builds on FIDO Alliance’s existing authentication standards, including FIDO2 (passwordless MFA). The standards aim to ensure AI agents can only make purchases within user-defined limits and with proper authorization, addressing the emerging reality of autonomous AI spending.
rss · WIRED AI · Apr 28, 13:00
Background: The FIDO Alliance is an open industry association founded in 2013 that develops authentication standards to reduce reliance on passwords, known for FIDO2 standards introduced in 2018. AI agents are software programs that perform tasks autonomously using machine learning, and autonomous AI spending has become a production reality over the past eighteen months with agents purchasing API credits, cloud resources, and SaaS tools.
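The user-defined spending limits the standard is meant to enforce can be sketched as a simple authorization check. The mandate fields and checks below are assumptions for illustration; the actual FIDO Alliance standard is still in development and may look quite different.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpendingMandate:
    """User-defined limits an agent must stay within (hypothetical schema)."""
    per_purchase_limit: float   # max amount for a single purchase
    total_budget: float         # max cumulative spend under this mandate
    allowed_merchants: frozenset

class AgentWallet:
    def __init__(self, mandate: SpendingMandate):
        self.mandate = mandate
        self.spent = 0.0

    def authorize(self, merchant: str, amount: float) -> bool:
        """Approve a purchase only if every limit in the mandate holds."""
        m = self.mandate
        ok = (
            merchant in m.allowed_merchants
            and amount <= m.per_purchase_limit
            and self.spent + amount <= m.total_budget
        )
        if ok:
            self.spent += amount
        return ok

wallet = AgentWallet(SpendingMandate(50.0, 100.0, frozenset({"api-credits.example"})))
print(wallet.authorize("api-credits.example", 40.0))  # True
print(wallet.authorize("api-credits.example", 80.0))  # False: exceeds per-purchase limit
```

A real standard would additionally bind each authorization to a cryptographic credential (e.g. a passkey), which is the part FIDO2 infrastructure already provides.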
Tags: #AI agents, #cybersecurity, #FIDO Alliance, #digital payments, #AI safety
Bloomberg Terminal Getting AI Chatbot Overhaul ⭐️ 7.0/10
Bloomberg Terminal, the iconic financial data platform used by traders and financial professionals worldwide, is being overhauled with AI chatbot features. WIRED spoke with Bloomberg’s chief technology officer about these major changes coming to the platform. This represents a fundamental shift in how financial professionals interact with market data and execute trading decisions. With AI integration, millions of Bloomberg Terminal users could experience a completely new way of querying data, analyzing markets, and managing their workflow. The AI chatbot features will allow users to interact with the terminal using natural language queries, potentially replacing traditional keyboard shortcuts and menu navigation. This overhaul marks one of the most significant platform transformations in Bloomberg Terminal’s history.
rss · WIRED AI · Apr 28, 08:30
Tags: #fintech, #AI, #Bloomberg Terminal, #trading, #finance
Talkie: 13B Vintage Language Model from 1930 ⭐️ 7.0/10
Talkie is a newly released 13B parameter language model trained on 260B tokens of historical pre-1931 English text, developed by Nick Levine, David Duvenaud, and Alec Radford (known for GPT, GPT-2, Whisper). The project includes a base model and an instruction-tuned chat model, both released under Apache 2.0 license. This project represents a unique ‘vintage’ AI assistant that speaks in early 20th century language patterns, offering researchers a novel tool to explore historical language usage and test whether models trained on older data can predict future events or even rediscover past scientific discoveries like General Relativity. The base model (53.1 GB) was trained entirely on out-of-copyright pre-1931 text, while the chat model (26.6 GB) required synthetic data generation using Claude Sonnet 4.6 and Claude Opus 4.6 to improve instruction-following abilities, which introduces potential anachronistic contamination.
rss · Simon Willison · Apr 28, 02:47
Background: The training data uses pre-1931 text because US copyright cutoff is currently January 1, 1931, making this data legally free to use without licensing concerns. This connects to the ‘vegan model’ concept - LLMs trained entirely on licensed or out-of-copyright data. The project also explores fascinating research questions like whether a model trained up to 1911 could independently discover General Relativity.
Tags: #language-models, #historical-nlp, #ai-experiments, #huggingface, #vintage-english
Neural Ledger System: Persistent AI Context Across Sessions ⭐️ 7.0/10
Developer Umbecanessa built the Neural Ledger System (NLS), which captures and persists the model’s K/V states (and recurrent states for hybrid models like Qwen3.5-MoE) after each turn to disk, then re-injects them into the cache on the next session, solving context loss without re-sending the conversation history. Provisional patent US 64/050,345 was filed. This addresses the main pain point of AI coding agents (Cursor, OpenCode, Aider, Claude Code, etc.) losing context across sessions: forgetting SSH addresses and re-asking for database passwords. It also tackles the economic inefficiency of long-running agents, since the full conversation history is currently re-sent every turn and cost compounds with conversation length; in testing, NLS achieved 99.3% prompt-token savings. NLS was validated across three settings: standard conversational recall (5/5), the LongMemEval benchmark (8/18, identical to text-based prompts), and a real agentic loop with OpenCode achieving 99.3% token savings on the recall path. Caveats: single-GPU validation only; the plugin source is proprietary (patent pending); a provisional patent was filed, with a non-provisional/PCT filing planned within 12 months.
rss · Hacker News - Show HN · Apr 28, 20:22
Background: AI coding agents powered by transformer models rely on Key/Value (K/V) states in the attention mechanism to maintain context. Conventional architecture re-sends the full conversation every turn, causing costs to compound with length and forcing context loss when sessions restart. The neural ledger approach persists these computed states across sessions, allowing the model to behave as if it had full context without re-computation or re-transmission.
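The persist-then-reinject idea can be sketched as a small ledger. The NLS plugin itself is proprietary, so this is a conceptual stand-in: real K/V caches are per-layer tensors, and plain Python lists stand in for them here.

```python
import os
import pickle
import tempfile

class KVLedger:
    """Persist a model's post-turn K/V cache and restore it next session."""

    def __init__(self, path):
        self.path = path

    def save(self, kv_cache):
        """Write the post-turn K/V states to disk."""
        with open(self.path, "wb") as f:
            pickle.dump(kv_cache, f)

    def load(self):
        """Re-inject persisted states at the start of the next session."""
        if not os.path.exists(self.path):
            return None  # cold start: no prior context
        with open(self.path, "rb") as f:
            return pickle.load(f)

path = os.path.join(tempfile.mkdtemp(), "session.kv")
ledger = KVLedger(path)
ledger.save({"layer_0": ([0.1, 0.2], [0.3, 0.4])})  # end of session 1
restored = ledger.load()                             # start of session 2
print(restored["layer_0"][0])  # [0.1, 0.2]
```

The token savings come from the restore path: instead of re-sending (and re-encoding) the whole conversation, only new-turn tokens need processing on top of the restored cache.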
Discussion: No comments yet. The HN post has 1 point and 0 comments as of processing time.
Tags: #AI coding agents, #context management, #developer tools, #attention states, #software architecture
HIC: Same-Ring Isolation OS Kernel Achieves 4ns IPC ⭐️ 7.0/10
HIC is a capability-secure OS kernel that isolates system services inside Ring 0 using same-ring isolation, achieving 4ns IPC with just 3 instructions (call, bt, jmp) while enforcing isolation via the MMU, or via segment descriptors on the 8086, without any cross-domain privilege switches. This design eliminates the overhead of kernel-mediated inter-process communication and privilege switching, the fundamental bottleneck of traditional microkernel designs. It demonstrates that strong security isolation can be achieved without sacrificing performance, and may influence OS kernel architecture research for embedded and safety-critical systems. Each service occupies a separate physical address range enforced by the MMU. Callers cannot name business-logic addresses because no mapping exists in their page tables: the hardware simply refuses to produce that address. On the 8086, the caller’s LDT contains no descriptor for the business-logic segment, so the hardware refuses the far jump. The entire architecture and the critical instruction sequences are documented in the repository.
rss · Hacker News - Show HN · Apr 28, 18:56
Background: Capability security is a model where access to objects is granted through unforgeable tokens called capabilities, rather than by direct object references. In x86 architecture, Ring 0 is the highest privilege level where kernel code executes. The MMU (Memory Management Unit) enforces memory boundaries between address spaces. On 8086, segment descriptors in the LDT (Local Descriptor Table) replace page tables to define memory access rights.
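The capability-naming restriction can be modeled in user space, though only loosely: HIC enforces it in hardware via MMU mappings and LDT descriptors, while this Python toy can only model the fact that callers never hold a service’s address, only an unforgeable token.

```python
import secrets

_services = {}  # capability token -> service entry point; private to the "kernel"

def register(fn):
    """Kernel-side: install a service and mint an unforgeable capability."""
    token = secrets.token_hex(16)
    _services[token] = fn
    return token

def invoke(token, *args):
    """Caller-side: the only way in. An unknown token simply has no mapping,
    mirroring how HIC's hardware refuses to produce an unmapped address."""
    fn = _services.get(token)
    if fn is None:
        raise PermissionError("no such capability")
    return fn(*args)

cap = register(lambda x: x + 1)
print(invoke(cap, 41))  # 42
```

In HIC the equivalent of `invoke` is the 3-instruction call/bt/jmp sequence, and the equivalent of the missing dictionary entry is a missing page-table mapping or LDT descriptor.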
Tags: #operating-systems, #capability-security, #kernel-design, #low-level, #security
Nvidia Executive: AI Costs More Than Human Workers ⭐️ 7.0/10
An Nvidia executive stated that the cost of AI systems exceeds the cost of employing human workers, challenging the common assumption that AI adoption leads to cost savings for businesses. This statement is significant because many companies are investing heavily in AI expecting lower labor costs and efficiency gains. If AI is actually more expensive than human workers, it could fundamentally change how businesses approach AI adoption and workforce planning. The perspective comes from an Nvidia executive, a company that is a major provider of AI chips and hardware. Given Nvidia’s central role in the AI ecosystem, their analysis of AI economics carries significant weight in the industry.
rss · Hacker News - AI / LLM / Agent · Apr 28, 22:15
Background: Traditionally, companies have adopted AI to reduce labor costs and improve efficiency, assuming that automated systems would be cheaper than human workers over time. Nvidia, as a leading AI hardware provider, has unique visibility into the actual costs of running AI systems at scale, including computational resources, energy consumption, and maintenance.
Discussion: The news received relatively low engagement (10 points, 6 comments), suggesting limited community interest or that the statement was seen as not entirely surprising given existing concerns about AI costs. The low engagement may indicate that industry insiders already question whether AI truly delivers cost savings.
Tags: #AI economics, #Nvidia, #labor costs, #AI adoption, #industry perspective
OpenAI Models, Codex, and Managed Agents Come to AWS ⭐️ 7.0/10
OpenAI announces that their AI models, the Codex coding agent, and Managed Agents are now available on AWS through Amazon Bedrock. Customers can build with OpenAI models using AWS’s existing infrastructure, security controls, identity systems, and procurement processes. This partnership makes OpenAI’s advanced AI capabilities more accessible to enterprise customers who already use AWS. It allows businesses to leverage OpenAI’s technology while staying within their existing AWS ecosystem, potentially accelerating AI adoption in enterprise environments. OpenAI Codex is an AI coding agent released in April 2025, capable of writing code, fixing bugs, handling feature development, and managing pull requests. It is available through ChatGPT web app, Codex CLI, desktop apps for Windows and macOS, and IDE integrations. Amazon Bedrock Managed Agents powered by OpenAI enables customers to build production-ready agents in the cloud.
rss · Hacker News - OpenAI / Anthropic / Gemini / DeepSeek · Apr 28, 17:13
Background: Amazon Bedrock is AWS’s fully managed service for building generative AI applications. OpenAI Codex is an autonomous AI coding agent designed to handle software engineering tasks in cloud-based sandboxes. The integration allows AWS customers to use OpenAI’s models within their existing AWS accounts and security frameworks.
Tags: #OpenAI, #AWS, #Cloud Computing, #AI Services, #Partnership
Anthropic Joins Blender Development Fund as Corporate Patron ⭐️ 7.0/10
Anthropic has joined the Blender Development Fund as a Corporate Patron, supporting the open-source 3D modeling software Blender. This sponsorship marks a notable industry development where an AI company directly supports open-source 3D tools, potentially signaling future AI-assisted 3D content creation capabilities. As a Corporate Patron, Anthropic contributes financial support to ensure Blender’s continued development. The deal was announced on Blender’s official press page, with the news generating significant discussion on Hacker News (240 points, 189 comments).
rss · Hacker News - OpenAI / Anthropic / Gemini / DeepSeek · Apr 28, 16:07
Background: Blender is a popular open-source 3D creation suite used by millions worldwide for modeling, animation, rendering, and compositing. The Blender Development Fund provides sustainable funding through various sponsorship tiers, with Corporate Patron being one of the higher contribution levels. Anthropic, creator of Claude AI, is now joining other corporate sponsors in supporting this open-source project.
Discussion: The Hacker News discussion shows strong community interest, with speculation about why Anthropic might be interested in Blender—ranging from potential Claude integration with 3D tools to general AI industry support for creative open-source projects. Some commenters see this as a positive sign for open-source sustainability.
Tags: #open-source, #Anthropic, #Blender, #corporate sponsorship, #AI/3D tools
Limits of Self-Improving LLMs: No Singularity Without Symbolic Synthesis ⭐️ 7.0/10
A new arXiv paper (2601.05280v2) by Hector Zenil argues that large language models cannot achieve singularity through self-improvement alone, claiming that symbolic model synthesis is a necessary condition for true AGI-level self-enhancement. This paper challenges popular AGI timeline beliefs and questions whether current LLM scaling approaches can lead to superintelligence. It directly addresses the AI safety and AGI community’s debates about rapid self-improvement capabilities. The paper specifically examines whether LLMs can iteratively improve their own capabilities without external symbolic reasoning. It argues that pattern recognition alone (what LLMs do) is insufficient for the kind of recursive self-improvement required for singularity-level intelligence.
rss · Lobsters - AI · Apr 28, 16:43
Background: Self-improving AI refers to systems that can enhance their own capabilities without human intervention. Symbolic model synthesis is an approach combining neural networks with symbolic AI, allowing programs to use symbolic primitives. The ‘singularity’ in AI context refers to a hypothetical future point where AI surpasses human intelligence and recursively improves itself exponentially. This paper argues current LLMs lack the symbolic reasoning needed for such recursive self-improvement.
Discussion: The Lobsters discussion shows strong technical interest, with commenters debating the paper’s argument that symbolic reasoning is truly necessary for AGI, and questioning whether it adequately addresses newer self-improvement approaches (such as Constitutional AI) that have emerged since the paper was first submitted.
Tags: #AI safety, #LLM limitations, #singularity, #machine learning, #AGI
Using Snowflake Cortex Agents with Structured Data ⭐️ 7.0/10
A technical practice article demonstrating how to use Snowflake Cortex Agents to maximize value from structured data in the Snowflake data cloud platform. This represents an important development for data engineers and analytics practitioners, as it shows how AI agents can be integrated with a major data platform to process and analyze structured data using natural language. Cortex Agents let users interact with data through the agent:run API, which requires either the SNOWFLAKE.CORTEX_USER or SNOWFLAKE.CORTEX_AGENT_USER role. An agent can pull answers from data, handle complex questions, and manage multi-turn conversations.
rss · InfoQ 中文站 · Apr 28, 15:36
Background: Snowflake Cortex is Snowflake’s AI/ML capability that integrates AI functions into their data cloud platform. Cortex Agents are AI agents that can understand natural language queries and interact with structured data stored in Snowflake. This represents an emerging AI capability being integrated into enterprise data platforms.
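The agent:run interaction boils down to a role-gated POST with a message list. The endpoint path and payload schema below are assumptions for illustration, not Snowflake’s documented API; consult the Cortex Agents documentation before relying on either.

```python
import json

def build_agent_run_request(agent_name, question):
    """Construct a hypothetical agent:run request (path and schema assumed)."""
    endpoint = "/api/v2/cortex/agent:run"  # hypothetical path
    body = {
        "agent": agent_name,
        "messages": [{"role": "user", "content": question}],
    }
    return endpoint, json.dumps(body)

endpoint, payload = build_agent_run_request(
    "sales_agent", "Total revenue by region last quarter?"
)
print(json.loads(payload)["agent"])  # sales_agent
```

In a real deployment the caller would also carry Snowflake authentication and hold one of the two CORTEX roles mentioned above; multi-turn conversations extend the `messages` list with prior turns.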
Discussion: This technical practice article is significant for data engineers looking to leverage AI capabilities within Snowflake. The integration of AI agents with structured data represents a growing trend in data platforms enabling more intuitive data analysis.
Tags: #Snowflake, #Cortex Agents, #AI/ML, #Data Engineering, #Cloud Data Platform
Grafana Refactors Loki with Kafka, Releases CLI Tool for AI Agents ⭐️ 7.0/10
Grafana has rearchitected its Loki log aggregation system using Apache Kafka, and released a new CLI tool designed to bring observability capabilities to AI coding agents. This development demonstrates the evolving data pipeline design in the observability space and shows the growing convergence of DevOps and AI agent workflows. The integration enables AI coding agents to leverage structured logging and metrics for enhanced debugging and system monitoring. The Kafka-based refactoring allows Loki to handle high-throughput log streams more efficiently as a distributed event streaming platform. The new CLI tool provides AI agents with direct command-line access to observability data, enabling them to query logs and metrics during code execution.
rss · InfoQ 中文站 · Apr 28, 15:00
Background: Loki is Grafana’s open-source log aggregation system designed to store and query logs at scale, functioning similarly to Prometheus but for logs. Apache Kafka is a widely-used distributed event streaming platform for building real-time data pipelines. This architectural change reflects the industry trend of integrating observability deeper into AI-powered development workflows.
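Why a Kafka-style partitioned log helps an ingester like Loki can be sketched in a few lines: entries for the same log stream hash to the same partition, so each consumer sees an ordered, independently shardable slice. This is a conceptual sketch, not Grafana’s actual implementation.

```python
import hashlib
from collections import defaultdict

NUM_PARTITIONS = 4

def partition_for(stream_labels: str) -> int:
    """Deterministically map a stream's label set to one partition."""
    digest = hashlib.sha256(stream_labels.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

partitions = defaultdict(list)
for labels, line in [
    ('{app="api"}', "GET /health 200"),
    ('{app="api"}', "GET /users 500"),
    ('{app="web"}', "static asset served"),
]:
    partitions[partition_for(labels)].append((labels, line))

# All entries of a given stream land in one partition, preserving order.
p_api = partition_for('{app="api"}')
print([line for labels, line in partitions[p_api] if labels == '{app="api"}'])
# ['GET /health 200', 'GET /users 500']
```

The per-stream ordering guarantee is what lets downstream consumers (including a CLI-driven AI agent) replay a stream’s logs deterministically without a global sort.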
Tags: #observability, #Grafana, #Loki, #Kafka, #AI agents, #DevOps
ClickHouse Full-Text Search on Object Storage ⭐️ 7.0/10
ClickHouse has rearchitected its full-text search capabilities to run efficiently on object storage, enabling high-performance FTS without dedicated search infrastructure. This represents a significant advancement by combining two major cloud-native technologies — ClickHouse’s analytical database capabilities with object storage — potentially reducing infrastructure complexity and costs for organizations needing full-text search at scale. The new index layout is optimized for sequential access patterns, allowing the system to respond to many queries directly from the index structure without needing to read the actual indexed text columns, significantly reducing I/O overhead.
rss · InfoQ 中文站 · Apr 28, 14:26
Background: ClickHouse is a columnar OLAP database known for its exceptional query performance on large-scale analytical workloads. Object storage is a cloud storage architecture that stores data as objects, offering scalability and cost-effectiveness compared to traditional block storage. Full-text search typically requires specialized search engines like Elasticsearch, which maintain inverted indexes optimized for random access patterns.
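The “answer from the index without reading the text column” idea is the classic inverted-index property. A minimal sketch, far simpler than ClickHouse’s actual on-disk layout (which is optimized for sequential object-storage reads):

```python
from collections import defaultdict

docs = {
    1: "fast analytical queries",
    2: "full text search on object storage",
    3: "search infrastructure at scale",
}

# Build the inverted index once: token -> set of row IDs containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for token in text.split():
        index[token].add(doc_id)

def search(term):
    """Row IDs come straight from the index; the text column is untouched."""
    return sorted(index.get(term, set()))

print(search("search"))  # [2, 3]
```

The I/O win the article describes comes from exactly this: for many queries only the (sequentially laid out) index structure is read, and the far larger text column never leaves object storage.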
Tags: #ClickHouse, #Full-Text Search, #Object Storage, #Database, #Search Infrastructure
Cloudflare Sandboxes GA: Persistent Isolation for AI Agents ⭐️ 7.0/10
Cloudflare announced the general availability of Sandboxes and Cloudflare Containers at its Agents Week event, providing AI agents with persistent, isolated Linux environments featuring real computers with shell, file systems, and background processes that can start on-demand and resume from where they left off. This addresses a critical gap in AI agent infrastructure by enabling stateful, secure execution environments that can persist across sessions. As AI agents become more complex and handle multi-step workflows, having dedicated compute resources that maintain state is essential for building reliable AI applications. The GA release adds security credential injection, PTY terminal support, persistent code interpreter, file system monitoring, snapshot-based session recovery, and usage-based billing. The technical implementation uses Linux kernel-level isolation with namespaces and cgroups to strictly limit CPU, memory, network, and filesystem resources.
rss · InfoQ 中文站 · Apr 28, 11:00
Background: AI agents typically need persistent execution environments to handle complex, multi-step tasks, but traditional cloud infrastructure often provides stateless serverless functions. Cloudflare Sandboxes addresses this by giving AI agents their own dedicated compute resources with full Linux environment, shell access, and filesystem rather than ephemeral function calls.
Tags: #Cloudflare, #AI Agents, #Security, #Sandboxing, #Cloud Infrastructure
China Adds 38 New Undergraduate Majors Including Embodied AI ⭐️ 7.0/10
China’s Ministry of Education announced 38 new undergraduate majors for 2026, covering areas such as energy science and engineering, agricultural robotics, biomanufacturing, brain-computer science, digital tourism, and digital trade. Nine universities including Harbin Institute of Technology will offer embodied AI (具身智能) as a major, and a cross-disciplinary category was added for the first time with 15 majors. This represents China’s strategic workforce development for emerging technologies. The first-time inclusion of a cross-disciplinary category signals strong strategic priority on integrating AI with the real economy (实体经济). It reflects China’s approach to building a talent pipeline for embodied AI, which is widely viewed as the next wave of AI after large language models. The undergraduate catalog now covers 13 discipline categories, 92 specialty areas, and 883 total majors. During the ‘14th Five-Year Plan’ period, cumulative major adjustments exceeded 30%, with this year being the first to surpass 10% adjustment in a single year. The 4 newly-established cross-disciplinary majors include embodied AI.
telegram · zaihuapd · Apr 28, 08:52
Background: 具身智能 (Embodied AI) refers to artificial intelligence systems that combine an AI ‘brain’ with a physical body to perceive, reason, and interact with the real physical world. Unlike traditional AI that exists only in digital space, embodied AI enables machines to understand physical environments through sensors and actuators—typically embodied in robots like humanoid robots. This concept was first proposed by Alan Turing in 1950 and is considered the third wave of AI development after symbolic AI and deep learning.
Tags: #China education policy, #artificial intelligence, #embodied AI, #undergraduate major adjustment, #interdisciplinary studies
CAC Cracks Down on CapCut, Dreamina AI Over AI Content Labeling Failures ⭐️ 7.0/10
On April 28, 2026, China’s Cyberspace Administration (CAC) announced enforcement actions against major AI content platforms including CapCut (剪映), CatBox (猫箱), and Dreamina AI (即梦 AI) for failing to properly implement AI-generated content identification requirements. This represents the first major regulatory enforcement action demonstrating concrete compliance expectations for AI content identification in China. The platforms involved have millions of users, making this a significant signal to the entire industry about regulatory priorities. The platforms were subjected to measures including regulatory interviews, mandatory rectification orders, administrative warnings, and strict handling of responsible persons. The CAC emphasized that platforms cannot use technological innovation as an excuse to bypass compliance requirements.
telegram · zaihuapd · Apr 28, 09:29
Background: China has been developing a comprehensive AI governance framework, including requirements that AI-generated content must be clearly identifiable. This regulatory action follows the enforcement of China’s AI content identification regulations and demonstrates active regulatory enforcement as generative AI applications rapidly proliferate.
Discussion: Industry observers see this as a clear signal that China is serious about AI content transparency requirements. The enforcement action sets a precedent that other platforms handling AI-generated content will need to implement proper identification mechanisms or face similar penalties.
Tags: #AI regulation, #content moderation, #China tech policy, #AI governance, #platform compliance
Google Signs Classified AI Deal with Pentagon Worth $200M ⭐️ 7.0/10
Google has signed a classified agreement with the US Department of Defense to provide its AI models for classified military work, including mission planning and weapons targeting, worth up to $200 million per contract. This deal marks a significant escalation in Big Tech’s direct involvement with military AI applications, raising critical ethical questions about AI’s role in weapons systems and defense operations. It reflects a broader trend of Silicon Valley companies partnering with the Pentagon despite internal employee opposition. The agreement permits AI use for any lawful government purpose but prohibits deployment for domestic mass surveillance or autonomous weapons without human oversight. Google cannot veto legitimate government operational decisions; Anthropic, by contrast, was previously designated a supply chain risk after refusing to relax its safety restrictions.
telegram · zaihuapd · Apr 28, 11:47
Background: This deal follows a 2025 agreement where the Pentagon signed contracts worth up to $200 million each with Google, OpenAI, and Anthropic. The US Department of Defense has been rapidly integrating AI into military operations, including targeting support and predictive logistics. Google previously faced internal protests over its involvement in Project Maven, a military AI drone analysis project.
Tags: #AI policy, #military AI, #Google, #Pentagon, #tech ethics
LiteLLM SQL Injection Allows Unauthenticated API Key Theft ⭐️ 7.0/10
A critical SQL injection vulnerability (CVE-2026-42208) was discovered in LiteLLM’s proxy component, allowing unauthenticated attackers to extract stored API keys by crafting malicious Bearer tokens that get logged during authentication failures. This vulnerability is highly critical because it affects any publicly exposed LiteLLM instance, allowing attackers to steal API keys for OpenAI, Anthropic, Azure, and other LLM providers without any credentials, potentially leading to unauthorized usage and financial impact. The vulnerability is fixed in LiteLLM v1.83.7-stable. Users should upgrade immediately and rotate all stored API keys. As a mitigation, setting disable_error_logs: true in the config can prevent the injection vector from being triggered.
telegram · zaihuapd · Apr 28, 14:44
Background: LiteLLM is an open-source library providing a unified interface to call over 100 LLM providers (OpenAI, Anthropic, Azure, Bedrock, etc.) using the OpenAI format. Its Proxy component acts as an AI Gateway for enterprise deployments. OAuth 2.0 Bearer tokens are the standard authentication method for accessing protected API endpoints.
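The vulnerability class (not LiteLLM’s actual code) is easy to demonstrate: interpolating an attacker-controlled Bearer token into a SQL string lets a crafted token change the query’s meaning, while a parameterized query neutralizes it. A minimal sqlite3 illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keys (token TEXT, provider_key TEXT)")
conn.execute("INSERT INTO keys VALUES ('sk-good', 'sk-openai-SECRET')")

def lookup_unsafe(token):
    # VULNERABLE: the token is spliced into the SQL text, so a crafted
    # Bearer value can rewrite the WHERE clause.
    return conn.execute(
        f"SELECT provider_key FROM keys WHERE token = '{token}'"
    ).fetchall()

def lookup_safe(token):
    # SAFE: the driver binds the value; it can never alter the SQL.
    return conn.execute(
        "SELECT provider_key FROM keys WHERE token = ?", (token,)
    ).fetchall()

malicious = "x' OR '1'='1"
print(lookup_unsafe(malicious))  # [('sk-openai-SECRET',)] -- key exfiltrated
print(lookup_safe(malicious))    # [] -- injection neutralized
```

This is also why the advisory pairs the upgrade with key rotation: once the injection path has been reachable, stored provider keys must be assumed compromised.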
Tags: #security vulnerability, #SQL injection, #CVE-2026-42208, #LiteLLM, #API key exposure
Warp Open-Sources Client Code Base for AI-Powered Terminal Development ⭐️ 7.0/10
Warp has open-sourced the client code base of its terminal-based intelligent development environment, with the UI framework under the MIT license and the remaining code under the AGPL v3 license. This release matters because OpenAI’s role as founding sponsor and the hybrid licensing model signal a commitment to the developer-tools ecosystem, while support for multiple AI agent integrations enables an open ecosystem for AI-driven development workflows. The agent-management workflow is powered by GPT models and supports built-in coding agents as well as custom CLI agents, including Claude Code, Codex, and Gemini CLI. The AGPL v3 license specifically requires that modifications be shared back when the software is used over a network.
telegram · zaihuapd · Apr 28, 17:04
Background: AGPL v3 (GNU Affero General Public License) is a copyleft license designed for network-use software, requiring that any modifications made when running the program on a server be made available to users. AI coding agents are autonomous agents that can receive task descriptions, write code, and open pull requests with human review at defined checkpoints.
Tags: #developer tools, #terminal emulator, #open source, #AI coding, #Warp
NVIDIA Releases Nemotron 3 Super: 120B Open-Weight MoE Model for Multi-Agent AI ⭐️ 7.0/10
NVIDIA officially released Nemotron 3 Super, a 120-billion parameter open-weight large language model that activates only a subset of its parameters per token during inference. The model combines Mamba state-space layers with Transformer layers in a Mixture of Experts (MoE) architecture and supports a 1 million token context window. This release is significant for multi-agent AI systems because it provides an open-weight model with an extremely large context window and substantial performance gains: up to 5x higher throughput and up to 2x higher accuracy. Its permissive license allows both research and commercial use, potentially accelerating development of complex multi-agent AI workflows. The model uses a hybrid architecture combining Mamba SSM layers for efficient long-sequence processing with Transformer layers for strong language understanding. The MoE design routes tokens to specialized experts, enabling a massive parameter count while keeping inference compute manageable by activating only a few experts per token.
telegram · zaihuapd · Apr 29, 00:00
Background: Mamba is a state-space model (SSM) architecture developed to address Transformers’ limitations in processing long sequences, offering subquadratic computational complexity. Mixture of Experts (MoE) is an architecture technique used by frontier models like GPT-4 and DeepSeek that splits neural networks into specialized experts, activating only a few per token to scale parameters without proportional inference cost increases.
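The per-token routing that keeps MoE inference cheap can be sketched as top-k gating. The gate weights below are random, not Nemotron’s trained router, and the expert count is made up for illustration.

```python
import math
import random

NUM_EXPERTS, TOP_K = 8, 2
random.seed(0)

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(token_logits):
    """Pick the TOP_K experts with the highest gate scores for this token."""
    scores = softmax(token_logits)
    ranked = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)
    chosen = ranked[:TOP_K]
    # Renormalize the chosen experts' weights so they sum to 1.
    total = sum(scores[i] for i in chosen)
    return [(i, scores[i] / total) for i in chosen]

gate_logits = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
selected = route(gate_logits)
print(len(selected))  # 2: only 2 of 8 experts run for this token
```

Because only the chosen experts’ weights are touched per token, total parameter count can grow far faster than per-token compute, which is the property the MoE design in Nemotron 3 Super exploits.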
Tags: #NVIDIA, #LLM, #MoE architecture, #Multi-agent AI, #Open weights