In an interview with CNBC, AMD CEO Lisa Su said that she expects us to go from “a billion active users in AI today to over 5 billion active users over the next five years”, requiring compute to increase 100 times to meet demand. This is what all major AI companies are working toward: a world where every connected human uses AI daily for almost every single task.
It’s easy to become addicted to using AI. It knows everything and can do everything. You ask it and it performs. The big question is the long-term effects of this behavior. If everyone does nothing but ask the AI to do things, how will that affect us as humans in the long run? Lots of people already work this way today as middle managers, but there is a reason managers bring engineers to meetings – someone who actually knows the craft often brings a different perspective to the discussion.
A few weeks ago in Sweden an AI-generated song became the #1 most-streamed song on Spotify. We are not only using AI – we are also consuming AI at a rapidly increasing pace. A recent study showed that over 20% of all YouTube Shorts are AI-generated, and we are still in the very early days of generated AI content. I don’t think we can avoid this future, but I also think very few people can accurately estimate the long-term effects of everyone in the world using AI daily for both work and consumption.
Thank you for being a Tech Insights subscriber!
Listen to Tech Insights on Spotify: Tech Insights 2026 Week 3 on Spotify
THIS WEEK’S NEWS:
- NVIDIA Vera Rubin Enters Full Production at CES 2026
- Google partners with Boston Dynamics
- OpenAI Launches ChatGPT for Healthcare and API Platform
- OpenAI Launches ChatGPT Health with Personal Health Data Integration
- AI Model Uses One Night Of Sleep To Predict Disease Risk
- No, Microsoft did not rebrand Office to Copilot
- Microsoft Launches Copilot Checkout and Brand Agents
- Cursor Introduces Dynamic Context Discovery for Coding Agents
- Nvidia and Mercedes-Benz Launch Alpamayo Autonomous Driving Platform
- xAI Raises $20 Billion in Series E Funding Round
- Anthropic Cuts xAI Access to Claude Models in Cursor
NVIDIA Vera Rubin Enters Full Production at CES 2026
https://nvidianews.nvidia.com/news/rubin-platform-ai-supercomputer

The News:
- NVIDIA announced the Vera Rubin platform, a six-chip AI system now in full production, with volume shipments targeting second half 2026.
- The platform reduces inference token costs by 10x compared to Blackwell through extreme co-design across the Rubin GPU, Vera CPU with 88 custom Olympus ARM cores, NVLink 6 switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet switch.
- Training a 10 trillion parameter mixture-of-experts model requires 75% fewer GPUs than Blackwell, with a 1MW cluster delivering 10 million tokens per second at the same power versus 1 million tokens per second on Blackwell (see the sketch after this list).
- The Rubin GPU delivers 50 petaFLOPS of NVFP4 AI inference with 336 billion transistors across two compute dies, representing a 5x performance increase from Blackwell’s 10 petaFLOPS.
- HBM4 memory provides 22 TB/s bandwidth with 288GB capacity per GPU, up from NVIDIA’s initial 13 TB/s target, achieved through silicon improvements rather than compression.
- NVLink 6 delivers 3.6 TB/s bidirectional bandwidth per GPU and 260 TB/s across the Vera Rubin NVL72 rack, with in-network compute providing 14.4 TFLOPS FP8 for collective operations.
- Assembly time drops from two hours to five minutes using modular, cable-free tray design with 45°C hot-water cooling.
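To put those throughput numbers in perspective, here is a quick back-of-the-envelope sketch. The tokens-per-second and power figures are NVIDIA’s claims from this announcement; the joules-per-token framing is my own illustration, not an official metric:

```python
# Back-of-the-envelope check of NVIDIA's claimed throughput-per-power
# figures. The tokens/second numbers are from the announcement; the
# energy-per-token framing is my own illustration, not an official metric.

CLUSTER_POWER_W = 1_000_000        # 1 MW cluster

rubin_tps = 10_000_000             # claimed tokens/second on Vera Rubin
blackwell_tps = 1_000_000          # claimed tokens/second on Blackwell

# Watts divided by tokens/second gives joules per token.
rubin_j = CLUSTER_POWER_W / rubin_tps          # 0.1 J/token
blackwell_j = CLUSTER_POWER_W / blackwell_tps  # 1.0 J/token

print(f"Rubin:     {rubin_j:.1f} J/token")
print(f"Blackwell: {blackwell_j:.1f} J/token")
print(f"Energy efficiency gain: {blackwell_j / rubin_j:.0f}x")
```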
My take: 10x lower cost per token for inference, around 5 times faster than the current state-of-the-art Blackwell chips, 10x the throughput at the same power usage, and it can be assembled in five minutes. I can only imagine what the AI models of 2027 will be able to do running on this hardware.
Google partners with Boston Dynamics
https://www.wired.com/story/google-boston-dynamics-gemini-powered-robot-atlas

The News:
- Boston Dynamics announced its production-ready Atlas humanoid robot will integrate Google DeepMind’s Gemini Robotics AI foundation models, starting deployment at Hyundai factories and Google facilities in 2026.
- The electric humanoid can lift objects up to 50 kg (110 lbs) and operates in temperatures from -20°C to 40°C.
- Atlas uses Gemini’s multimodal processing to interpret visual sensor data and natural language commands simultaneously, allowing factory managers to issue verbal instructions like “That door panel is scratched, put it in the reject pile” instead of writing coordinate-based code.
- The robot learns autonomously from its environment and can share learned tasks across an entire fleet of Atlas units.
- Atlas includes 360-degree vision to detect nearby humans, pausing work when people enter its working space before resuming operations once clear.
- Hyundai plans to manufacture 30,000 Atlas units per year at a dedicated robotics factory, with integration into Hyundai Motor Group Metaplant America scheduled for 2028.
My take: You have probably already seen plenty of videos of Boston Dynamics Atlas, and it finally seems ready for mass production. The robot runs on battery for 4 hours, can change its own batteries when needed, can lift objects up to 50 kg, and operates down to -20 degrees Celsius. It will be clunky, it will look slow and weak, and people will laugh at it. But anyone who understands the laws of technical evolution can easily see where these robots are heading in 5-10 years: robots working 24/7, lifting 200 kg objects, and moving with superhuman speed and precision. Any company with manufacturing capabilities should follow this development very closely.
OpenAI Launches ChatGPT for Healthcare and API Platform
https://openai.com/index/openai-for-healthcare

The News:
- OpenAI launched ChatGPT for Healthcare on January 7, 2026, a HIPAA-compliant workspace that gives healthcare organizations access to GPT-5.2 models trained on clinical, research, and administrative tasks.
- The platform integrates with enterprise systems like Microsoft SharePoint to incorporate institutional policies and care pathways, drawing from millions of peer-reviewed studies with citations including journal names and publication dates.
- Organizations receive access management through SAML SSO and SCIM, customer-managed encryption keys, audit logs, and Business Associate Agreements. OpenAI states patient data is not used for model training.
- Early adopters include AdventHealth, Baylor Scott & White Health, Boston Children’s Hospital, Cedars-Sinai Medical Center, HCA Healthcare, Memorial Sloan Kettering Cancer Center, Stanford Medicine Children’s Health, and UCSF.
- The OpenAI API platform supports developers building tools like ambient listening and automated clinical documentation. Companies including Abridge, Ambience, and EliseAI use the API with Business Associate Agreements.
- A study with Penda Health found that an OpenAI-powered clinical copilot reduced both diagnostic and treatment errors in routine primary care settings.
My take: This press release is not just about some US hospitals rolling out ChatGPT. It’s about OpenAI launching fine-tuned versions of GPT-5.2 specifically built for clinical, research, and administrative tasks. No benchmarks have been published yet, and it will be very interesting to see how well these models perform against human experts once benchmarks are released.
OpenAI Launches ChatGPT Health with Personal Health Data Integration
https://openai.com/index/introducing-chatgpt-health

The News:
- OpenAI announced ChatGPT Health on January 7, 2026, a separate space within ChatGPT where users can securely connect medical records and wellness apps including Apple Health, MyFitnessPal, Function, Weight Watchers, AllTrails, Instacart, and Peloton.
- The feature operates in an isolated environment with purpose-built encryption separate from regular ChatGPT conversations. Health conversations, connected apps, and files are stored separately and are not used to train OpenAI’s foundation models.
- Users can upload medical records through b.well’s health data connectivity platform (U.S. only at launch), and Apple Health integration requires iOS. The AI explains lab results, prepares appointment questions, interprets wearable data, and summarizes care instructions.
- OpenAI collaborated with over 260 physicians from 60 countries who provided feedback on more than 600,000 model outputs across 30 areas of focus. The health model is evaluated using HealthBench, a framework that assesses clinical reasoning, safety, uncertainty handling, and communication quality rather than exam-style questions.
- The feature launched to a limited group of early users through a waitlist and is unavailable in the European Economic Area, Switzerland, and the United Kingdom. OpenAI states over 230 million people globally ask health and wellness questions on ChatGPT each week.
- Medical record integrations are available in the U.S. only at launch. Users with ChatGPT Free, Go, Plus, and Pro plans outside restricted regions can join the waitlist, with broader web and iOS availability planned for the coming weeks.
My take: This really has the potential to make health care so much more efficient. Having a patient’s full health history – sleep data, activity, blood oxygen levels and more – makes the AI models much better at analyzing actual test results later. This is clearly not launching in the EU anytime soon, but if it proves to be as valuable as I believe it can be, maybe the EU will loosen up its regulations a bit in the next 50 years so we can all get better health care services for our ever-aging population.
AI Model Uses One Night Of Sleep To Predict Disease Risk
https://www.nature.com/articles/s41591-025-04133-4

The News:
- Stanford and collaborators present SleepFM, a multimodal sleep foundation model that uses overnight polysomnography data to predict future risk for more than 100 diseases, published in Nature Medicine on 5 January 2026.
- The team trained SleepFM on about 585,000 hours of sleep recordings from roughly 65,000 people, covering signals such as brain activity, heart activity, respiration, eye movements and body movements.
- SleepFM identified 130 disease categories whose risk could be predicted with reasonable accuracy, including dementia, various cancers, cardiovascular disease, chronic kidney disease, pregnancy complications and all-cause mortality.
- For several outcomes, the model achieved concordance indices above 0.8, for example around 0.84 for all-cause mortality and 0.85 for dementia, which is considered high discrimination in clinical research.
- On standard sleep tasks such as sleep stage classification and sleep apnea severity, SleepFM matched or exceeded existing state-of-the-art models, with reported mean F1 scores around 0.70 to 0.78 for staging and accuracies up to 0.87 for apnea presence.
- The model uses a contrastive learning approach that can ingest different polysomnography channel configurations (sketched after this list), and the authors report successful transfer to external cohorts like the Sleep Heart Health Study that were not used in pretraining.
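The paper’s actual architecture is more involved, but the core idea of contrastive pretraining across sensor modalities can be sketched in a few lines. Below is a generic InfoNCE-style alignment between two modality embeddings in PyTorch – the encoders, dimensions, and temperature are placeholders of my own, not SleepFM’s actual design:

```python
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(emb_a, emb_b, temperature=0.1):
    """Generic InfoNCE-style loss aligning two modality embeddings.

    emb_a, emb_b: (batch, dim) embeddings of the SAME sleep epochs
    recorded through two different channel groups (e.g. EEG vs ECG).
    Matching rows are positives; everything else in the batch is a negative.
    """
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    logits = a @ b.T / temperature                       # (batch, batch) similarities
    targets = torch.arange(a.size(0), device=a.device)   # diagonal = positives
    # Symmetric cross-entropy: a->b and b->a
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Toy usage: two hypothetical encoders producing 128-dim embeddings
batch, dim = 32, 128
eeg_emb = torch.randn(batch, dim)   # stand-in for an EEG encoder output
ecg_emb = torch.randn(batch, dim)   # stand-in for an ECG encoder output
print(pairwise_contrastive_loss(eeg_emb, ecg_emb).item())
```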
My take: Before you start trying to connect SleepFM to your Garmin or Oura ring, this thing requires special hardware. It needs a polysomnography machine to measure things like EEG (brain), electrocardiography (heart), electromyography (muscle), pulse, and breathing airflow. With all those signals available, identifying 130 different disease risks seems reasonable for a model trained on this much sleep data. Impressive work by Stanford and a lovely name for an AI model.
No, Microsoft did not rebrand Office to Copilot

The News:
- Last week many online news sources, including the popular newsletter Rundown AI, reported that Microsoft renamed its Office 365 productivity suite to the “Microsoft 365 Copilot app”.
- The news spread all over Hacker News, X and Reddit.
- The news is incorrect. Microsoft did not change the name of Microsoft 365, previously called Office 365, to “Microsoft 365 Copilot app”.
My take: Last week Microsoft updated www.office.com with a new banner “Welcome to the Microsoft 365 Copilot app”. Contrary to what most people thought, Microsoft has NOT renamed Microsoft 365 to Copilot. Quick history lesson: in February 2019, before “Office 365” was renamed to “Microsoft 365”, Microsoft launched a separate app called “Office”, often referred to as the “Office Hub”. In November 2022 this “Office” app was renamed to the “Microsoft 365 app”, and in January 2025 it was again renamed to the “Microsoft 365 Copilot app”. The website www.office.com has always been the front-end to this specific app, not the Microsoft 365 suite.
So why is everyone just catching up on this now, a year later? It’s because the website www.office.com was updated last week to show “Welcome to the Microsoft 365 Copilot app”, and everyone rushed to conclusions without spending even 2 minutes researching what happened, including major newsletters like the Rundown AI. It just shows you how little time they spend on each news item before publishing.
Read more:
- No, Microsoft didn’t rebrand Office to Microsoft 365 Copilot | The Verge
- Microsoft Renamed Office to “Microsoft 365 Copilot app” – YouTube
- Microsoft have lost their minds – YouTube
- Microsoft Office is Dead, welcome to “The Microsoft 365 Copilot app (formerly Office)” : r/sysadmin
Microsoft Launches Copilot Checkout and Brand Agents

The News:
- Microsoft launched Copilot Checkout, an in-chat commerce feature that lets users complete purchases without leaving the Copilot interface on Copilot.com in the U.S. Merchants retain their merchant-of-record status, own transaction data, and maintain customer relationships.
- The service integrates with PayPal, Shopify, Stripe, and Etsy, with Shopify merchants automatically enrolled after an opt-out window while PayPal and Stripe merchants must apply. Launch merchants include Urban Outfitters, Anthropologie, Ashley Furniture, and Etsy sellers.
- Microsoft reports that journeys including Copilot resulted in 53% more purchases within 30 minutes compared to those without, and sessions with shopping intent were 194% more likely to result in a purchase.
- Brand Agents, available now for Shopify merchants, provides AI shopping assistants trained on merchant product catalogs that deploy on brand websites. Alexander Del Rossa saw over 3X higher conversion rates in Brand Agent-assisted sessions versus unassisted sessions.
- Both features integrate with Microsoft Clarity analytics, providing dashboards that track engagement rates, conversion uplift, and average order value. The deployment requires installing Clarity on Shopify stores and joining the waitlist.
My take: If you remember back in September, OpenAI launched ChatGPT Instant Checkout, and now here is Microsoft with a similar solution. Both solutions use the Agentic Commerce Protocol (ACP) standard. What seemed strange to me in this press release is that Microsoft says that “Journeys that include Copilot led to 53% more purchases within 30 minutes of interaction compared to those without”, however their only source for this claim is a footnote saying “Microsoft Internal Data, August 2025”. This means that the only backing for this figure is some internal benchmark Microsoft ran in August, long before this product was even finished. Apparently putting out empty facts like this works – just look at the Office debacle above. Reviewers and AI bots today scan through articles faster than ever in their eagerness to collect newsworthy headlines.
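For those curious what agentic checkout looks like mechanically, here is a rough sketch of the flow in Python. The endpoint paths, field names, and base URL are hypothetical placeholders I made up for illustration – this is not the actual ACP specification:

```python
import requests

# Illustrative sketch of an agent-driven checkout flow in the spirit of
# the Agentic Commerce Protocol. Endpoint paths, field names, and the
# base URL are hypothetical placeholders, NOT the actual ACP spec.
MERCHANT_API = "https://merchant.example.com/agentic"

# 1. The agent creates a checkout session on the merchant's side.
session = requests.post(f"{MERCHANT_API}/checkout_sessions", json={
    "items": [{"sku": "ROBE-XL-NAVY", "quantity": 1}],
    "buyer": {"email": "shopper@example.com"},
}).json()

# 2. The user confirms inside the chat; the agent attaches a payment
#    token from the payment provider (never raw card data).
completed = requests.post(
    f"{MERCHANT_API}/checkout_sessions/{session['id']}/complete",
    json={"payment_token": "tok_placeholder"},
).json()

# 3. The order lives on the merchant's own API.
print(completed["status"], completed["order_id"])
```

The key design point is that the order is created on the merchant’s own API, which is how the merchant keeps its merchant-of-record status, transaction data, and customer relationship.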
Cursor Introduces Dynamic Context Discovery for Coding Agents
https://cursor.com/blog/dynamic-context-discovery

The News:
- Cursor introduced dynamic context discovery, a technique that stores long tool outputs, chat history, and terminal sessions as files instead of loading all data into the agent’s context window upfront.
- The agent retrieves specific information using commands like tail or grep when needed, rather than processing large JSON responses that bloat the context window (sketched after this list).
- For MCP servers with many tools, Cursor syncs tool descriptions to a folder so the agent looks up tools on demand. An A/B test showed this reduced total agent tokens by 46.9% in runs that called an MCP tool.
- Chat history becomes accessible as files during summarization, allowing the agent to search through prior context to recover details lost during compression.
- The system supports Agent Skills, an open standard where skills are files containing domain-specific instructions. The agent uses grep and semantic search to pull in relevant skills dynamically.
- Terminal outputs sync to the local filesystem automatically, letting users ask questions like “why did my command fail?” without copy-pasting output into the agent.
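Conceptually the pattern is simple: persist bulky output to disk and give the agent shell-style tools to slice it on demand. Here is a minimal sketch on a Unix-like system – the file layout and helper names are my own, not Cursor’s internals:

```python
import subprocess
import tempfile
from pathlib import Path

# Minimal sketch of the pattern: persist a long tool output to disk,
# then let the agent pull out only the slices it needs via shell-style
# commands, instead of pasting the whole blob into its context window.
WORKDIR = Path(tempfile.mkdtemp(prefix="agent-ctx-"))

def store_tool_output(name: str, output: str) -> Path:
    """Write a tool's full output to a file and return its path.

    Only the path (plus perhaps a short preview) goes into the model's
    context; the body stays on disk until the agent asks for it."""
    path = WORKDIR / f"{name}.log"
    path.write_text(output)
    return path

def agent_tail(path: Path, lines: int = 20) -> str:
    """What the agent runs instead of reading the whole file."""
    return subprocess.run(["tail", "-n", str(lines), str(path)],
                          capture_output=True, text=True).stdout

def agent_grep(path: Path, pattern: str) -> str:
    """Targeted lookup: return only matching lines."""
    return subprocess.run(["grep", pattern, str(path)],
                          capture_output=True, text=True).stdout

# Toy usage: a huge JSON-lines response becomes a file reference.
big_output = "\n".join(f'{{"row": {i}, "status": "ok"}}' for i in range(10_000))
log = store_tool_output("mcp_query", big_output)
print(agent_tail(log, 3))             # last few lines only
print(agent_grep(log, '"row": 42,'))  # one targeted match
```

The agent’s context then only ever holds the file path plus whatever slices it explicitly requested, which is where the reported token savings come from.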
My take: This might be good for models that are inherently poor at using their context, like Claude Opus 4.5. As soon as you fill up more than 30-40% of the context in Claude Opus things begin to go haywire, so you always need to keep things small and isolated and fight its eagerness.
That said, dynamic context discovery is an interesting concept, but Cursor did not provide any benchmarks showing how it affects the model’s understanding of complex code bases or its ability to generate source code that adheres to current architectural guidelines. The only figure we got was a 47% reduction in token count. So I won’t be spending my time on this. If you need lots of context you should use GPT-5.2. It’s the king of context.
Nvidia and Mercedes-Benz Launch Alpamayo Autonomous Driving Platform

The News:
- Nvidia released Alpamayo, a family of open-source AI models and tools designed to handle rare driving scenarios in autonomous vehicles. Mercedes-Benz will ship the first production vehicles with the complete Alpamayo stack in Q1 2026 in the United States, followed by Europe in Q2 2026.
- The core technology is Alpamayo 1, a 10-billion parameter vision-language-action model that uses chain-of-thought reasoning. The model processes video input from multiple cameras and outputs both driving trajectories and explanations for its decisions.
- Mercedes-Benz CEO Ola Källenius tested the system through San Francisco and Silicon Valley, driving uninterrupted through heavy traffic without intervention, describing it as “level 2 plus”. The vehicle uses 30 sensors including 10 cameras, 5 radar sensors, and 12 ultrasonic sensors.
- Jensen Huang stated the collaboration took several thousand people and at least five years of work. Nvidia will operate and maintain the stack long-term, marking its first full-stack autonomous vehicle effort.
- The platform includes AlpaSim, an open-source simulation framework, and over 1,700 hours of driving data released publicly. Additional partners showing interest include Lucid, JLR, Uber, and Berkeley DeepDrive.
- The model was trained on 80,000 hours of multi-camera driving data with 700,000 chain-of-causation reasoning traces. In closed-loop evaluation using AlpaSim, it achieved a score of 0.72 on the PhysicalAI-AV-NuRec dataset.
My take: Mercedes-Benz with Alpamayo now also operates at Level 2, the same autonomy level as Tesla’s Full Self-Driving. But where Tesla is able to make it work with just eight 5-megapixel cameras, the Alpamayo system requires 10 cameras, 5 radar sensors, and 12 ultrasonic sensors. This is nowhere near a mass market launch, and even though Mercedes-Benz says it will ship the first production vehicles in Q1 and Q2 respectively, vehicles with these specifications are very far from mass production.
xAI Raises $20 Billion in Series E Funding Round

The News:
- xAI closed a $20 billion Series E funding round, surpassing the initial $15 billion target, reaching a post-money valuation of approximately $230 billion. The capital funds compute infrastructure expansion, next-generation model training, and product development across consumer and enterprise segments.
- Investors include Valor Equity Partners, Fidelity Management & Research Company, Qatar Investment Authority, MGX, Baron Capital Group, and StepStone Group. Strategic investors Nvidia and Cisco participated to support GPU cluster buildout.
- The company operates over one million H100 GPU equivalents through its Colossus I and II systems in Memphis, Tennessee, claiming to run some of the largest AI computing clusters. Training for Grok 5 is underway following the release of Grok 4, Grok Voice, and Grok Imagine in 2025.
- xAI offers Grok Business at $30 per user per month and Grok Enterprise with custom single sign-on and user provisioning. The Collections API costs $2.50 per 1,000 searches with initial free indexing and storage.
My take: It remains to be seen if xAI can close the gap to OpenAI with enough GPUs in its infrastructure. With $20 billion in extra cash it will be able to continue expanding its infrastructure throughout 2026, so this is in preparation for models launching in 2027. The recent xAI bikini scandal, however, clearly shows that an AI company needs more than just engineering skills, or your AI agents will very quickly head in the wrong direction.
Anthropic Cuts xAI Access to Claude Models in Cursor
https://twitter.com/kyliebytes/status/2009686466746822731

The News:
- Anthropic blocked xAI staff from accessing Claude models through Cursor IDE this week. The enforcement targets xAI’s use of Claude for competitive AI development, specifically training alternative systems.
- xAI cofounder Tony Wu informed staff on Wednesday that Anthropic stopped responding to Cursor requests. Wu stated “According to Cursor, this is a policy Anthropic is enforcing against all its major competitors”.
- Anthropic’s Commercial Terms of Service Section D4 prohibits clients from using services to create competing products or train AI models. The action represents independent enforcement of existing commercial agreements rather than a coordinated strategy.
- xAI had been using Cursor with Claude models to accelerate internal development workflows. Removing access mid-cycle disrupts key engineering productivity at the competing AI lab.
- Anthropic declined to comment on the action. Cursor directed inquiries to Anthropic, while xAI did not respond to requests for comment.
My take: Claude Code with Opus 4.5 is quickly gaining traction, and it’s easy to understand why. It’s a friendly and fast environment to work with, and you get virtually instant results. It’s spreading rapidly in almost every company, xAI included. For many it’s the tool use and integrations that make working with Claude Code so appealing. Claude Opus 4.5 is still quite bad at writing high-quality source code (my bar is quite high), but for everything else when it comes to using a computer from the terminal it’s actually quite good. The same day the team got the internal memo in their inboxes, Elon Musk went public promising a major upgrade that will one-shot many complex coding tasks. That is about where GPT-5.2 is right now, so let’s see how things evolve next month.