Houston's data center scene has received its latest bullish forecast. Photo via serverfarmllc.com

The Houston market could more than double its data center capacity by the end of 2028, a new report indicates.

The report, published by commercial real estate services provider CBRE, says greater demand for data center capacity in the Houston area is being fueled by energy companies, along with large-scale cloud services and AI-driven tenants.

In the second half of 2025, the Houston market had 154 megawatts of data center capacity, which was on par with capacity in the second half of 2024. Another 28.5 megawatts of capacity was under construction during that period.

“Multiple providers are advancing new builds and redevelopments, including significant power upgrades to recently purchased buildings, underscoring long-term confidence even as the market works through elevated vacancy and uneven absorption,” CBRE says of Houston’s data center presence.

One project alone promises to significantly boost the Houston market’s data center capacity. Data center developer Serverfarm plans to use part of a $3 billion credit facility to build a 250-acre, AI-ready data center campus near Houston with a potential capacity of more than 500 megawatts. The Houston campus and two other Serverfarm projects are already leased to unidentified tenants, according to CoStar.

A 60-megawatt, AI-ready Serverfarm data center is under construction in Houston. The $137 million, 438,000-square-foot project, located near the former headquarters of computer manufacturer Compaq, is slated for completion in the third quarter of 2027.

Data Center Map identifies 59 data centers in the Houston area managed by 36 operators, including DataBank, Data Foundry, Digital Realty, IBM, Logix Fiber Networks, Lumen and TRG Datacenters. That compares with more than 180 data centers in Dallas-Fort Worth, more than 50 in the San Antonio area and 40 in the Austin area.

Texas is home to more than 400 data centers, according to Data Center Map.

In November, Google said it’s investing $40 billion to build AI data centers in West Texas and the Texas Panhandle.

“This is a Texas-sized investment in the future of our great state,” Gov. Greg Abbott said when Google’s commitment was announced. “Texas is the epicenter of AI development, where companies can pair innovation with expanding energy. Google's $40 billion investment makes Texas Google's largest investment in any state in the country and supports energy efficiency and workforce development in our state.”

A new report shows that Texas data centers used 25 billion gallons of water in 2025. Photo via HARC report.

Texas data center boom could strain water supply, new report warns

thirst for data

As data centers continue to boom throughout Texas, a new report from the Houston Advanced Research Center (HARC) warns that the trend could strain the state’s water supply.

HARC estimates that Texas data centers used 25 billion gallons of water in 2025, and that demand for water will continue to rise to meet the needs of the 464 data centers now operating in Texas as well as 70 additional sites under development.

In the report, titled “Thirsty Data and the Lone Star State: The Impact of Data Center Growth on Texas’ Water Supply,” The Woodlands-based nonprofit says that water use for cooling data centers is expected to double or triple nationally by 2028. If projections hold, annual water use by Texas data centers will climb to between 29 billion and 161 billion gallons by 2030, or roughly 0.5 percent to 2.7 percent of the state’s projected total water demand that year.

Data centers often use water for cooling, though how much they draw depends on the cooling method as well as the size and type of the facility. Although cooling water can be recirculated, some fresh withdrawals are always needed to replace water lost to evaporation and other system losses. Water is also consumed to cool the power plants that generate the electricity data centers use.
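The scale of that evaporative loss can be ballparked from the latent heat of water. The Python sketch below is a rough, back-of-the-envelope estimate using textbook constants; the facility sizes, the 80 percent evaporative fraction and the function name are illustrative assumptions, not figures from the HARC report.

```python
# Rough estimate of evaporative cooling makeup water for a data center.
# Constants are textbook values; loads and the evaporative fraction are
# illustrative assumptions, not numbers from the HARC report.

LATENT_HEAT_J_PER_KG = 2.4e6   # heat absorbed per kg of water evaporated (~30 C)
KG_PER_GALLON = 3.785          # mass of one US gallon of water, in kg
SECONDS_PER_DAY = 86_400

def daily_makeup_gallons(load_mw: float, evaporative_fraction: float = 0.8) -> float:
    """Gallons evaporated per day if `evaporative_fraction` of the facility's
    heat is rejected by evaporation (the rest by dry or sensible cooling)."""
    heat_w = load_mw * 1e6 * evaporative_fraction
    kg_per_second = heat_w / LATENT_HEAT_J_PER_KG
    return kg_per_second * SECONDS_PER_DAY / KG_PER_GALLON

for mw in (10, 100, 500):
    print(f"{mw:>4} MW -> ~{daily_makeup_gallons(mw):,.0f} gallons per day")
```

Under those assumptions, a 100-megawatt facility evaporates on the order of three-quarters of a million gallons per day, before blowdown and the water embedded in its electricity supply are counted.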

The HARC report offers guidance for addressing data centers’ water demands, including:

  • Dry cooling methods
  • Increased reliance on wind and solar energy sources
  • Alternative water supplies, like treated wastewater or brackish water for cooling
  • Adjusted operating schedules to accommodate water usage
  • Partnering with local companies to develop projects that reduce water leaks
  • Companies making their own investments in water infrastructure

The report goes on to explain that the Texas State Water Plan, produced by the Texas Water Development Board, projects shortages of 1.6 trillion gallons by 2030 and 2.3 trillion gallons by 2070. HARC posits that the recent surge in water demand from AI data centers is not fully reflected in those projections.

"Texas water plans always look backward, not forward," the report reads. "That means the 2027 water plan, which is in development now, will be based on 2026 regional water plans that do not include forecasted data center water use. Data centers that began operation in 2025 will not be added to the State Water Plan until 2032."

Currently, no state regulations require data centers to report how much water they use. However, the Public Utility Commission of Texas (PUC) plans to survey operators of data centers and cryptocurrency mining facilities this spring about their water consumption, cooling methods and electricity sources. Companies will have six weeks to respond, and the commission is expected to release the results by the end of the year. The Texas Water Development Board will assist the PUC with the questions.

“I think we all recognize the importance of data centers and the technology they support and what they give to our modern-day life,” PUC Commissioner Courtney Hjaltman said during the last commission meeting. “Texans, regulators and the legislature really need that understanding of data centers, really need to understand the water they’re using so that we can plan and create the Texas we want.”

See the full HARC report here.
Hadi Ghasemi, a University of Houston professor, has uncovered a method to release heat from data centers and electronics at record performance. Photo courtesy UH.

Houston researcher develops efficient method to cool AI data centers

cool findings

A University of Houston professor has developed a new cooling method that can remove heat from AI data centers at least three times more effectively than current technologies.

Hadi Ghasemi, a distinguished professor of Mechanical & Aerospace Engineering at UH, published his findings in two articles in the International Journal of Heat and Mass Transfer. The findings solve a critical issue in the growing AI sector, according to UH.

High-powered AI data centers generate huge amounts of heat because the GPUs and computing systems they run operate at extreme power densities, which introduces complex thermal challenges. Traditional cooling methods, such as microchannel, flow and spray cooling, have had limitations when exposed to extreme heat flux, according to UH.

Ghasemi’s research, however, found a more effective way to design thin-film evaporation structures to release heat from data centers and electronics at record performance.

Ghasemi’s solution coupled topology optimization and AI modeling to determine the best shapes for thin-film efficiency, ultimately landing on a branch-like structure resembling a tree.

The model found that the “branches” needed to be about 50 percent solid and 50 percent empty space for optimum efficiency, and that they could sustain high heat fluxes with minimal thermal resistance.
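As a loose illustration of why an interior optimum shows up at all, consider a toy model in which heat removal needs both solid material (to conduct heat into the structure) and open space (to evaporate liquid), with each pathway scaling with its share of the structure. This is not Ghasemi’s topology-optimization or AI model; the function names and the simple linear scaling are assumptions made purely for illustration.

```python
# Toy model: heat removal requires both conduction (through solid) and
# evaporation (through open space). If each scales with its share of the
# structure, their product peaks at a 50/50 split. Illustrative sketch only,
# not the published topology-optimization model.

def toy_heat_removal(solid_fraction: float) -> float:
    conduction = solid_fraction               # more solid -> better conduction
    evaporation_area = 1.0 - solid_fraction   # more void -> more evaporating surface
    return conduction * evaporation_area      # both pathways must carry the heat

best_value, best_fraction = max(
    (toy_heat_removal(s / 100), s / 100) for s in range(1, 100)
)
print(f"toy optimum at solid fraction {best_fraction:.2f}")  # prints 0.50
```

The real structures are far more intricate, but the toy captures the basic tension the optimization has to resolve.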

“These structures could achieve high critical heat flux at much lower superheat compared to traditionally studied structures,” Ghasemi said in a news release. “The new structures can remove heat without having to get as hot as previous removal systems.”

Ghasemi’s doctoral candidates, Amirmohammad Jahanbakhsh and Saber Badkoobeh Hezave, also worked on the project. The team believes the results show the impact of physics-aware AI design and can help ensure the reliability, longevity and stability of AI data centers.

“Beyond achieving record performance, these new findings provide fundamental insight into the governing heat-transfer physics and establishes a rational pathway toward even higher thermal dissipation capacities,” Ghasemi added in the release.

A new report shows the role Texas could play as the data-center sector enters "hyperdrive." Photo via JLL.com.

Texas could topple Virginia as biggest data-center market by 2030, JLL report says

data analysis

Everything’s bigger in Texas, they say—and that phrase now applies to the state’s growing data-center presence.

A new report from commercial real estate services provider JLL says Texas could overtake Northern Virginia as the world’s largest data-center market by 2030. Northern Virginia is a longtime holder of that title.

What’s driving Texas’ increasingly larger role in the data-center market? The key factor is artificial intelligence.

Companies like Google and Microsoft need more energy-hungry data centers to power AI innovations. In a 2023 article, Forbes explained that AI models consume a lot of energy because of the massive amount of data used to train them, as well as the complexity of those models and the rising volume of tasks assigned to AI.

“The data-center sector has officially entered hyperdrive,” Andy Cvengros, executive managing director at JLL and co-leader of its U.S. data-center business, said in the report. “Record-low vacancy sustained over two consecutive years provides compelling evidence against bubble concerns, especially when nearly all our massive construction pipeline is already pre-committed by investment-grade tenants.”

Dallas-Fort Worth has long dominated the Texas data-center market. But in recent years, West Texas has emerged as a popular territory for building data-center campuses, thanks in large part to an abundance of land and energy. Nearly two-thirds of data-center construction underway now is happening in “frontier markets” like West Texas, Ohio, Tennessee and Wisconsin, the JLL report says.

Northern Virginia, the current data-center champ in the U.S., boasted a data-center market with 6,315 megawatts of capacity at the end of 2025, the report says. That compares with 2,423 megawatts in Dallas-Fort Worth, 1,700 megawatts in the Austin-San Antonio corridor, 200 megawatts in West Texas, and 164 megawatts in Houston.

Musk has vowed to upend another industry. Photo via Getty Images

Elon Musk vows to put data centers in space and run them on solar power

Outer Space

Elon Musk vowed this week to upend another industry just as he did with cars and rockets — and once again he's taking on long odds.

The world's richest man said he wants to put as many as a million satellites into orbit to form vast, solar-powered data centers in space — a move to allow expanded use of artificial intelligence and chatbots without triggering blackouts and sending utility bills soaring.

To finance that effort, Musk combined SpaceX with his AI business on Monday, February 2, and plans a big initial public offering of the combined company.

“Space-based AI is obviously the only way to scale,” Musk wrote on SpaceX’s website, adding about his solar ambitions, “It’s always sunny in space!”

But scientists and industry experts say even Musk — who outsmarted Detroit to turn Tesla into the world’s most valuable automaker — faces formidable technical, financial and environmental obstacles.

Feeling the heat

Capturing the sun’s energy in space to run chatbots and other AI tools would ease pressure on power grids and cut demand for sprawling computing warehouses, which are consuming farmland, forests and vast amounts of cooling water.

But space presents its own set of problems.

Data centers generate enormous heat. Space seems to offer a solution because it is cold. But it is also a vacuum, trapping heat inside objects in the same way that a Thermos keeps coffee hot using double walls with no air between them.

“An uncooled computer chip in space would overheat and melt much faster than one on Earth,” said Josep Jornet, a computer and electrical engineering professor at Northeastern University.

One fix is to build giant radiator panels that glow in infrared light to push the heat “out into the dark void,” says Jornet, noting that the technology has worked on a small scale, including on the International Space Station. But for Musk's data centers, he says, it would require an array of “massive, fragile structures that have never been built before.”
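The Stefan-Boltzmann law gives a sense of how large those radiators would need to be. The Python sketch below assumes a panel near room temperature radiating from both sides into deep space and ignores absorbed sunlight; the panel temperature, emissivity and heat loads are illustrative assumptions, not figures from SpaceX or Jornet.

```python
# Back-of-the-envelope radiator sizing using the Stefan-Boltzmann law.
# Panel temperature, emissivity and heat loads are illustrative assumptions.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_w: float, panel_temp_k: float = 300.0,
                     emissivity: float = 0.9, sides: int = 2) -> float:
    """Panel area needed to radiate `heat_w` watts to deep space, ignoring
    sunlight absorbed by the panel and any view-factor losses."""
    flux_per_side = emissivity * SIGMA * panel_temp_k ** 4  # W per m^2 per side
    return heat_w / (flux_per_side * sides)

for mw in (1, 100, 1000):
    print(f"{mw:>5} MW -> ~{radiator_area_m2(mw * 1e6):,.0f} m^2 of two-sided panel")
```

At those assumptions, rejecting a single gigawatt of heat takes more than a square kilometer of panel, which is the scale behind the “massive, fragile structures” Jornet describes.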

Floating debris

Then there is space junk.

A single satellite breaking down or losing orbit could trigger a cascade of collisions, potentially disrupting emergency communications, weather forecasting and other services.

Musk noted in a recent regulatory filing that he has had only one “low-velocity debris generating event" in seven years running Starlink, his satellite communications network. Starlink has operated about 10,000 satellites — but that's a fraction of the million or so he now plans to put in space.

“We could reach a tipping point where the chance of collision is going to be too great,” said University at Buffalo's John Crassidis, a former NASA engineer. “And these objects are going fast, 17,500 miles per hour. There could be very violent collisions.”

No repair crews

Even without collisions, satellites fail, chips degrade, parts break.

Specialized graphics processing units, or GPUs, used by AI companies can become damaged and need to be replaced, for instance.

“On Earth, what you would do is send someone down to the data center,” said Baiju Bhatt, CEO of Aetherflux, a space-based solar energy company. “You replace the server, you replace the GPU, you’d do some surgery on that thing and you’d slide it back in.”

But no such repair crew exists in orbit, and GPUs in space can also be damaged by exposure to high-energy particles from the sun.

Bhatt says one workaround is to overprovision the satellite with extra chips to replace the ones that fail. But that’s an expensive proposition given they are likely to cost tens of thousands of dollars each, and current Starlink satellites only have a lifespan of about five years.
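How many spares that takes can be sketched with simple probability. The snippet below treats each chip as failing independently over the mission and asks how many must be launched so that enough still work at end of life; the survival rate, chip count and confidence target are made-up examples, not figures from Aetherflux, SpaceX or Nvidia.

```python
# Illustrative overprovisioning math: how many chips to launch so that enough
# survive the mission. Survival rate, counts and the confidence target are
# made-up examples, not figures from Aetherflux, SpaceX or Nvidia.

from math import comb

def prob_at_least(n: int, k: int, p: float) -> float:
    """Probability that at least k of n independent chips still work."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def chips_to_launch(needed: int, survival_p: float, target: float = 0.99) -> int:
    """Smallest launch count that keeps `needed` chips working with probability `target`."""
    n = needed
    while prob_at_least(n, needed, survival_p) < target:
        n += 1
    return n

needed, survival = 64, 0.90  # want 64 working chips; assume 90% survive the mission
n = chips_to_launch(needed, survival)
print(f"launch {n} chips ({n - needed} spares) for 99% confidence")
```

With those assumptions, roughly a fifth more chips have to ride along as spares, a cost that compounds quickly when each one runs tens of thousands of dollars and the whole satellite retires after about five years.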

Competition — and leverage

Musk is not alone trying to solve these problems.

Starcloud, a company in Redmond, Washington, launched a satellite in November carrying a single Nvidia-made AI chip to test how it would fare in space. Google is exploring orbital data centers in a venture it calls Project Suncatcher. And Jeff Bezos’ Blue Origin announced plans in January for a constellation of more than 5,000 satellites to start launching late next year, though its focus has been more on communications than AI.

Still, Musk has an edge: He's got rockets.

Starcloud had to use one of his Falcon rockets to put its chip in space last year. Aetherflux plans to send a set of chips it calls a Galactic Brain to space on a SpaceX rocket later this year. And Google may also need to turn to Musk to get its first two planned prototype satellites off the ground by early next year.

Pierre Lionnet, a research director at the trade association Eurospace, says Musk routinely charges rivals far more for launches than his own companies pay internally, as much as $20,000 per kilo of payload versus $2,000.

He said Musk’s announcements this week signal that he plans to use that advantage to win this new space race.

“When he says we are going to put these data centers in space, it’s a way of telling the others we will keep these low launch costs for myself,” said Lionnet. “It’s a kind of power play.”


Baker Hughes teams up with Google and XGS on energy tech

project partners

Houston-based energy technology company Baker Hughes recently forged two significant partnerships—one with tech titan Google and another with geothermal power startup XGS Energy.

Under the Google Cloud partnership, announced at CERAWeek 2026, Baker Hughes technology will be paired with Google Cloud AI and data analytics to improve the performance of AI data centers’ power systems and energy-transfer machinery. Furthermore, the two companies will explore opportunities for data centers to extract greater value from underused industrial and operational data.

“Infrastructure that powers the growing demand for AI and cloud computing is becoming one of the most critical drivers of global electricity needs,” Lorenzo Simonelli, chairman and CEO of Baker Hughes, said in the announcement.

“Through this partnership with Google Cloud, we are bringing together world-class power technologies and digital capabilities to help data center operators improve efficiency, enhance reliability, and accelerate progress toward lower-carbon operations,” he added.

Through the XGS partnership, Baker Hughes will provide engineering services for XGS’ 150-megawatt geothermal project in New Mexico. The project will supply energy to the Public Service Co. of New Mexico grid in support of New Mexico data centers operated by Meta Platforms, the parent company of Facebook and Instagram.

“With this single project for Meta in New Mexico, XGS will increase the state’s operating geothermal capacity by tenfold,” says Ghazal Izadi, chief operating officer at XGS.

“Geothermal energy plays a vital role in delivering reliable, cleaner power at scale,” added Maria Claudia Borras, chief growth and experience officer and interim executive vice president of industrial and energy technology at Baker Hughes. “By collaborating with XGS at this early stage, we are applying our ground‑to‑grid capabilities to reduce technical risk, accelerate reservoir validation, and engineer an integrated solution to deliver … power efficiently and reliably.”

California-headquartered XGS, which has a major presence in Houston, is known for its proprietary solid-state geothermal system that uses thermally conductive materials to deliver affordable energy wherever there is hot rock.

TotalEnergies strikes $1B federal deal to exit offshore wind sector

canceled projects

TotalEnergies, a French company whose U.S. headquarters is in Houston, has agreed to redirect nearly $930 million in capital from two offshore wind leases on the East Coast to oil, natural gas and liquefied natural gas (LNG) production.

In its agreement with the U.S. Department of the Interior, TotalEnergies has also promised not to develop new offshore wind projects in the U.S. “in light of national security concerns,” according to a department press release.

Federal agency hails ‘landmark agreement’

The Department of the Interior called the deal a “landmark agreement” that will steer capital “from expensive, unreliable offshore wind leases toward affordable, reliable natural gas projects that will provide secure energy for hardworking Americans.”

Renewable energy advocates object to what they believe is the Trump administration’s mischaracterization of offshore wind projects.

Under the Department of the Interior agreement, the federal government will reimburse TotalEnergies on a dollar-for-dollar basis for the leases, up to the amount that the energy company paid.

“Offshore wind is one of the most expensive, unreliable, environmentally disruptive, and subsidy-dependent schemes ever forced on American ratepayers and taxpayers,” Interior Secretary Doug Burgum said in the announcement. “We welcome TotalEnergies’ commitment to developing projects that produce dependable, affordable power to lower Americans' monthly bills while providing secure U.S. baseload power today — and in the future.”

TotalEnergies cites U.S. policy in move away from U.S. wind power

In the news release, Patrick Pouyanné, chairman and CEO of TotalEnergies, says the company was “pleased” to sign the agreement to support the Trump administration’s energy policy.

“Considering that the development of offshore wind projects is not in the country’s interest, we have decided to renounce offshore wind development in the United States, in exchange for the reimbursement of the lease fees,” Pouyanné says.

TotalEnergies redirects capital to LNG, oil, and natural gas

TotalEnergies will use the $928 million it spent on the offshore wind leases for development of a joint venture LNG plant in the Rio Grande Valley, as well as for upstream oil production in the Gulf of Mexico and shale gas production.

“These investments will contribute to supplying Europe with much-needed LNG from the U.S. and provide gas for U.S. data center development. We believe this is a more efficient use of capital in the United States,” Pouyanné says.

TotalEnergies paid $133.3 million for an offshore wind lease at the Carolina Long Bay project off the coast of North Carolina and $795 million in 2022 for a lease covering a 1,545-megawatt commercial offshore wind facility off the coast of New Jersey.

“TotalEnergies’ studies on these leases have shown that offshore wind developments in the United States, unlike those in Europe, are costly and might have a negative impact on power affordability for U.S. consumers,” TotalEnergies said in a company-issued press release. “Since other technologies are available to meet the growing demand for electricity in the United States in a more affordable way, TotalEnergies considers there is no need to allocate capital to this technology in the U.S.”

Since 2022, TotalEnergies has invested nearly $12 billion to promote the development of oil, LNG, and electricity in the U.S. In 2025, TotalEnergies was the No. 1 exporter of LNG from the U.S.

Industry groups push back on offshore wind pullback

The American Clean Power Association has pushed back on the Trump administration’s characterization of offshore wind projects.

“The offshore wind industry creates thousands of high-quality, good-paying jobs, and is revitalizing American manufacturing supply chains and U.S. shipyards,” Jason Grumet, the association’s CEO, said in December after the Trump administration paused all leases for large-scale offshore wind projects under construction in the U.S. “It is a critical component of our energy security and provides stable, domestic power that helps meet demand and keep costs low.”

Grumet added that President Trump’s “relentless attacks on offshore wind undermine his own economic agenda and needlessly harm American workers and consumers.” He called for passage of federal legislation that would prevent the White House “from picking winners and losers” in the energy sector and “placing political ideology” above Americans’ best interests.

The Natural Resources Defense Council offered a similar response to the offshore wind leases being paused.

“In its ongoing effort to prop up waning fossil fuels interests, the administration is taking wilder and wilder swings at the clean energy projects this economy needs,” said Pasha Feinberg, the council’s offshore wind strategist. “Investments in energy infrastructure require business certainty. This is the opposite. If the administration thinks the chilling impacts of this action are limited to the clean energy sector, it is sorely mistaken.”

Houston scientists' breakthrough moves superconductivity closer to real-world use

energy breakthrough

University of Houston researchers have set a new benchmark in the field of superconductivity.

Researchers from the UH physics department and the Texas Center for Superconductivity (TcSUH) have broken the transition temperature record for superconductivity at ambient pressure. The accomplishment could lead to more efficient ways to generate, transmit and store energy, which researchers believe could improve power grids, medical technologies and energy systems by enabling electricity to flow without resistance, according to a release from UH.

To break the record, UH researchers achieved a transition temperature of 151 Kelvin (about minus 122 degrees Celsius), the highest ever recorded at ambient pressure since superconductivity was discovered in 1911.

The transition temperature is the temperature below which a material becomes superconducting, allowing electricity to flow through it without resistance. Scientists have been working for decades to push transition temperatures closer to room temperature, which would make superconducting technologies more practical and affordable.

Currently, most superconductors must be cooled to extremely low temperatures, making them more expensive and difficult to operate.

UH physicists Ching-Wu Chu and Liangzi Deng published the research in the Proceedings of the National Academy of Sciences earlier this month. It was funded by Intellectual Ventures and the state of Texas through TcSUH, along with other foundations. Chu, founding director and chief scientist at TcSUH, made the breakthrough discovery in 1987 that the material YBCO becomes superconducting at 93 Kelvin, which helped launch a global competition to develop high-temperature superconductors.

“Transmitting electricity in the grid loses about 8% of the electricity,” Chu, who’s also a professor of physics at UH and the paper’s senior author, said in a news release. “If we conserve that energy, that’s billions of dollars of savings and it also saves us lots of effort and reduces environmental impacts.”

Chu and his team used a technique known as pressure quenching, adapted from techniques used to create diamonds. Researchers first apply intense pressure to the material to enhance its superconducting properties and raise its transition temperature, then cool, or quench, the sample so that the enhanced state is preserved after the pressure is released.

Next, researchers are targeting ambient-pressure, room-temperature superconductivity of around 300 K. In a companion PNAS paper, Chu and Deng point to pressure quenching as a promising approach to help bridge the gap between current results and that goal.

“Room-temperature superconductivity has been seen as a ‘holy grail’ by scientists for over a century,” Rohit Prasankumar, director of superconductivity research at Intellectual Ventures, said in the release. “The UH team’s result shows that this goal is closer than ever before. However, the distance between the new record set in this study and room temperature is still about 140 C. Closing this gap will require concerted, intentional efforts by the broader scientific community, including materials scientists, chemists, and engineers, as well as physicists.”