🟣 EV Daily: US doubles down on data and energy
Five things to know today + future of work.
🏭 Lead story: US doubles down on data and energy
Donald Trump and Senator Dave McCormick unveiled a $70 billion-plus package of private commitments to build and power a crop of new data centres in Pennsylvania and Ohio. Google will pour $25 billion into new data-centre capacity across the Pennsylvania-New Jersey-Maryland (PJM) grid, underwritten by a $3 billion, 670MW hydropower deal with the asset manager Brookfield that can scale to 3GW. Blackstone matched the bet with a $25 billion plan to co-locate data centres and gas generation, while CoreWeave pledged $6 billion for an AI-specific campus outside Lancaster. This is industrial policy by other means. Cheap, controllable power—rather than clever code—has become the decisive input for frontier models, and partisan politics is rushing to supply it. By bundling electrons, real estate and job guarantees into a single narrative, Republicans are positioning energy sovereignty as the new logic board of national AI advantage. [Semafor]
Key signals, quick scan
A 30-second scan of four secondary signals that hint at where the curve is bending.
🌏 After months of uncertainty, US regulators have again cleared Nvidia’s downgraded H20 GPU for export, reopening a pipeline worth billions. The flip-flop shows policy is fluid—and that hardware downgrades remain a viable workaround for accessing China’s vast AI spend. [FT]
🧪 An AI-powered laboratory can run itself and discover materials 10 times faster than the best human researchers. The dynamic-flow technique lets autonomous materials-discovery rigs collect a data point every 0.5 seconds instead of one per experiment—unlocking just-in-time discovery for battery, catalyst and semiconductor R&D. [ScienceDirect]
👩‍💻 OpenAI is developing features that let ChatGPT quickly create and edit presentations and spreadsheets directly compatible with PowerPoint and Excel. Microsoft’s old friend is now a direct rival. [The Information]
🪖 DJI ships millions of drones a year—outpacing the entire US industry (a 500-company-strong sector) by more than 20x. That volume gives Beijing a scale and cost advantage in both civilian and dual-use drone tech just as unmanned systems enter mainstream logistics and warfare. [NYT]
Future of work: Optimising your AI collaboration
When should an AI decide on its own, and when should a human see its confidence score? MIT economists put 3,500 volunteers through a fact-checking exercise to find out.
They modeled V(x)—the proportion of correct human decisions when shown an AI confidence score of x%—and used it to evaluate five hand-off policies. The winner was a two-tier rule:
1. Auto-accept AI answers above a high-confidence threshold (say 90%).
2. Show the exact percentage for everything else and let the human decide.
This nudged accuracy from 78% to 80.5%, a 2.5-point gain over having humans review every item with the exact score shown. The worst approach—hiding the AI score entirely—lagged by seven points.
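In code, the winning policy is little more than a router. Here is a minimal sketch (the function name, return shape and 0.90 constant are illustrative assumptions, not the study's implementation):

```python
# Minimal sketch of the two-tier hand-off rule described above.
# AUTO_THRESHOLD and route_prediction are illustrative names.

AUTO_THRESHOLD = 0.90  # tier-1 cutoff; tune per task (see the sweep below)

def route_prediction(answer: str, confidence: float) -> dict:
    """Route one AI prediction under the two-tier policy."""
    # Tier 1: high confidence, accept automatically with no human in the loop.
    if confidence >= AUTO_THRESHOLD:
        return {"decision": answer, "reviewed_by_human": False}
    # Tier 2: show the exact score so the reviewer can weigh the model's
    # uncertainty rather than guess at it, then let the human decide.
    print(f"AI suggests {answer!r} with {confidence:.0%} confidence - your call.")
    return {"decision": None, "reviewed_by_human": True}

route_prediction("The claim is supported", 0.96)  # tier 1: auto-accepted
route_prediction("The claim is false", 0.71)      # tier 2: deferred to a human
```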
The winning policy tackles a key challenge: humans often fail to trust even highly confident AI predictions, leading to suboptimal results. By estimating your own V(x) curve for specific tasks, you can set smarter thresholds: automate ultra-confident outputs and present exact scores where human judgment adds the most value.
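Concretely, if you log the AI's score, whether the AI was right, and whether the human (shown that score) decided correctly, estimating V(x) is a binning exercise, and the cutoff falls out of a one-dimensional sweep. A hedged sketch with synthetic data standing in for real logs (the data-generating assumptions and all names here are illustrative, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for real logs: per item, the AI's confidence, whether
# the AI was right, and whether a human shown that score chose correctly.
# Swap in your own records; only the array shapes matter.
n = 5_000
conf = rng.uniform(0.5, 1.0, n)
ai_correct = rng.random(n) < conf                     # assumes a roughly calibrated model
human_correct = rng.random(n) < (0.60 + 0.25 * conf)  # humans under-use high scores

def estimate_v(conf, human_correct, bins=10):
    """Empirical V(x): human accuracy within each confidence bin."""
    edges = np.linspace(conf.min(), conf.max(), bins + 1)
    idx = np.clip(np.digitize(conf, edges) - 1, 0, bins - 1)
    return edges, np.array([human_correct[idx == b].mean() for b in range(bins)])

def policy_accuracy(cutoff, conf, ai_correct, human_correct):
    """Accuracy of the two-tier rule: auto-accept at or above the cutoff,
    defer to the human (who sees the exact score) below it."""
    return np.where(conf >= cutoff, ai_correct, human_correct).mean()

# Where does human review actually add value?
edges, v = estimate_v(conf, human_correct)
for lo, hi, acc in zip(edges[:-1], edges[1:], v):
    print(f"V({lo:.2f}-{hi:.2f}) ~ {acc:.2f}")

# Sweep candidate cutoffs and keep the best one for this task.
cutoffs = np.linspace(0.5, 1.0, 51)
best = max(cutoffs, key=lambda t: policy_accuracy(t, conf, ai_correct, human_correct))
print(f"best cutoff: {best:.2f}, accuracy: "
      f"{policy_accuracy(best, conf, ai_correct, human_correct):.3f}")
```

On real logs, the best cutoff tends to land roughly where the model's own accuracy overtakes your measured V(x), which is the intuition behind the ~90% tier above.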
I’ve been asking most of the LLM tools I use (Perplexity, foundation LLMs, etc.) to provide a confidence score out of 100 when responding to my questions, as a way to gauge the accuracy of their answers. Interestingly, I’ve never seen a score lower than 90/100… Either they’re calibrated to be overconfident, or my questions are just too mundane. :-)