Field intelligence, written by operators.

Short, unvarnished analysis of the forces shaping towers, data centers, and digital infrastructure — from the people building them. No marketing gloss. No forecasts detached from delivery.

Case Study · Carrier Colocation

Anchoring an AT&T colocation on a newly developed rural monopole.

An illustrative use case showing how a well-entitled, well-sited 180-foot monopole becomes a long-duration cash-flow asset once an anchor carrier is deployed — and why tenant two is where the tower economics really live.

The operational context.

A typical scenario in the Retrospx pipeline: a submarket where AT&T's coverage footprint has material gaps, where land is available at defensible cost, and where the local zoning framework permits a telecom-use monopole with appropriate conditional approval. The project starts with site control, followed by radio-frequency validation from the carrier, zoning application, public hearing, utility service request, and FAA/SHPO/NEPA reviews where applicable. The entitlement timeline typically runs 6–12 months depending on jurisdiction; construction adds 60–90 days; energization and commissioning add another 30–60.
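As a back-of-envelope sketch, here's how those phase ranges compound into an end-to-end delivery window. The durations are the illustrative ranges above, not a schedule commitment for any specific jurisdiction:

```python
# Illustrative end-to-end delivery window for a new monopole, using the
# ranges above. Durations in months; planning heuristics, not a schedule.

phases = {
    "entitlement":  (6, 12),  # site control through zoning approval
    "construction": (2, 3),   # ~60-90 days
    "energization": (1, 2),   # utility service + commissioning, ~30-60 days
}

best = sum(lo for lo, hi in phases.values())
worst = sum(hi for lo, hi in phases.values())
print(f"End-to-end delivery window: {best}-{worst} months")
# -> End-to-end delivery window: 9-17 months
```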

Why the structure works.

Carrier anchor leases on a new monopole typically run 10–15 years of initial term with multiple 5-year renewal options and fixed annual escalators of roughly 3% in the U.S. market. Once the carrier installs antennas, radios, and cabinets, the operational cost of relocating that equipment is high enough that renewal becomes the strongly preferred outcome — industry data consistently reports tower lease renewal rates above 98%. For the tower owner, this creates unusually durable, inflation-protected recurring revenue on a physical asset with a multi-decade useful life.
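To make the escalator math concrete, here is a minimal sketch of how a fixed 3% escalator compounds over a 15-year initial term. The $2,400/month starting rent is a hypothetical placeholder, not a quoted market rate:

```python
# Minimal sketch of anchor-lease cash flow with a fixed annual escalator.
# Starting rent is a hypothetical placeholder; term and escalator follow
# the ranges cited above.

def lease_cash_flows(monthly_rent: float, escalator: float, years: int) -> list[float]:
    """Annual rent for each year of the term, escalating once per year."""
    return [monthly_rent * 12 * (1 + escalator) ** yr for yr in range(years)]

flows = lease_cash_flows(monthly_rent=2_400, escalator=0.03, years=15)
print(f"Year 1 rent:  ${flows[0]:>10,.0f}")
print(f"Year 15 rent: ${flows[-1]:>10,.0f}")
print(f"Total initial-term rent: ${sum(flows):,.0f}")
```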

Where the real economics appear.

The anchor tenant covers the structural economics of the site. The second and third tenants — a second carrier, a wireless ISP, a private LTE deployment, or an IoT gateway operator — are the inflection point. In the U.S. market, colocation rents historically run in the $1,500–$3,000/month range per additional tenant depending on geography and site characteristics, with limited incremental operating cost to the tower owner. The result is that every additional tenant flows substantially to the bottom line of a site whose fixed costs were already covered by the anchor.
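A simplified model shows why. In the sketch below, every dollar figure is an illustrative assumption rather than a quoted rate, but the shape of the result holds across realistic inputs: the anchor covers the fixed costs, so tenant two's rent lands almost entirely on margin.

```python
# Why tenant two is the inflection point: the anchor covers fixed site
# costs, so colocation rent flows to margin at a much higher rate.
# All dollar figures are illustrative assumptions, not quoted rates.

anchor_rent = 2_400 * 12        # anchor lease, annual
fixed_site_costs = 18_000       # ground lease, insurance, maintenance, taxes
colo_rent = 2_000 * 12          # one additional tenant, annual
colo_incremental = 2_000        # small added opex per colocation tenant

anchor_margin = anchor_rent - fixed_site_costs
colo_margin = colo_rent - colo_incremental
print(f"Anchor-only site margin:      ${anchor_margin:,}")
print(f"Margin added by tenant two:   ${colo_margin:,}")
print(f"Two-tenant vs anchor-only:    {(anchor_margin + colo_margin) / anchor_margin:.1f}x")
```

With these inputs the two-tenant site runs at roughly 3x the anchor-only margin, which is the arithmetic behind the 2–3x figure cited in the stats below.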

What it means for Retrospx.

Our portfolio is explicitly constructed around this economic logic. We don't build single-tenant sites. We build multi-carrier-ready monopoles with engineered load capacity, pre-installed conduit, and compound layouts that make tenant two and tenant three operationally simple to onboard. The discipline of that design is where a good tower business compounds into a great one.

10–15 yr · Typical carrier anchor lease term
3% · Standard annual rent escalator
98%+ · Industry lease renewal rate
2–3x · Margin uplift from colocation density

This is a general industry-aligned use case illustrating carrier anchor economics on monopole infrastructure. Nothing in this analysis should be read as a confidential customer deployment.

Case Study · 5G Densification

T-Mobile's mid-band densification wave — and what it means for tower operators.

The carrier capex story of the last two years has been mid-band densification (2.5 GHz and C-band), not green-field builds. Operators are spending ~$35B annually adding antennas, radios, and power systems to existing structures. Here's how that reshapes the tower P&L.

Why amendment revenue matters.

Industry reporting indicates that major U.S. carriers are channeling roughly $35 billion of annual capex into mid-band densification — adding equipment to existing towers rather than acquiring entirely new sites. For tower owners, this shows up as amendment revenue: additional rent associated with added antenna weight, wind load, cabinet space, and power draw beyond what the original lease contemplated. These amendments are high-margin, quick to close relative to new leases, and structurally recurring as 5G standards continue to evolve.
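One way to see why amendments are attractive is simple payback arithmetic. All figures in this sketch are illustrative assumptions, not contract terms:

```python
# Sketch of why amendment revenue is attractive: an amendment adds rent
# against a site whose costs are already sunk, and closes far faster than
# a new lease. All figures are illustrative assumptions.

new_lease_rent = 2_000 * 12   # annual rent, new colocation lease
new_lease_cost = 30_000       # structural mods, legal, site work to onboard
amendment_rent = 600 * 12     # annual rent uplift for added loading/power
amendment_cost = 3_000        # engineering review + paperwork

def payback_months(annual_rent: float, onboarding_cost: float) -> float:
    return onboarding_cost / (annual_rent / 12)

print(f"New lease payback:  {payback_months(new_lease_rent, new_lease_cost):.0f} months")
print(f"Amendment payback:  {payback_months(amendment_rent, amendment_cost):.0f} months")
```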

The operational implication.

Owning amendment-ready towers is now a distinct competitive advantage. This means: structural engineering that comfortably supports additional loading, compound space for expanded radio cabinets, and power service that can be upgraded without a ground-up rework. Retrospx's newer monopole builds are explicitly engineered for this future — the marginal cost of overbuilding the structural and compound envelope at construction is modest; the marginal revenue from easy amendment onboarding compounds for the life of the asset.

The T-Mobile-style use case.

Where T-Mobile (or any major U.S. carrier) is executing a densification program, the carrier's site acquisition team is looking for towers that can be amended quickly, with minimal structural modification and without re-opening zoning. A tower that has been engineered for multi-tenant loading from day one, and whose compound has been laid out with future equipment in mind, reaches "amendment ready" status faster than a retrofit site. That operational readiness is, in our experience, a material factor in which towers a carrier selects for densification first.

Strategic implications.

The 5G densification wave is not a one-time event. Carriers will continue to add mid-band and eventually mmWave equipment, and as Open RAN architectures mature, the equipment profile on each tower will continue to evolve. Tower owners who treat each site as a multi-decade, multi-upgrade asset will capture the economics. Tower owners who built single-tenant, minimum-viable structures will spend the next decade retrofitting — or losing amendment opportunities to owners who didn't.

Industry-aligned analysis of mid-band densification economics. Not a representation of any specific contractual relationship.

Market Outlook · Towers

The future of the tower business — durable, but structurally shifting.

The U.S. telecom tower market is forecast to grow from roughly 165,000 installed towers in 2025 to more than 200,000 by 2030, at a mid-single-digit CAGR. But the interesting story isn't the unit count — it's where the revenue per tower is going.
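A quick arithmetic check on those headline numbers: at exactly 200,000 the implied rate is just under 4%, and it is the "more than" that carries the figure toward mid-single digits.

```python
# Back-of-envelope check on the implied growth rate:
# 165,000 towers in 2025 growing to 200,000+ by 2030.

units_2025, units_2030, years = 165_000, 200_000, 5
cagr = (units_2030 / units_2025) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~3.9% at exactly 200k; higher for "200k+"
```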

The demand is still there — and it's concentrating.

Carrier spend is increasingly flowing into existing tower inventory rather than green-field builds. That is structurally good news for owners of well-located, amendment-ready sites — and structurally tough news for operators whose portfolios are concentrated in low-density rural geographies where carriers are less inclined to add capacity. Tower value, in other words, is increasingly a function of where the tower is, not how many towers you own.

Stealth and camouflage are the fastest-growing format.

Municipalities are increasingly stipulating camouflaged profiles — flagpole monopoles, tree-disguised stealth towers, rooftop concealments, and integrated architectural poles. Industry analysis projects stealth formats growing at a CAGR of nearly 7% through 2030, the fastest segment of the market. Premium rents on camouflaged sites often offset their lower antenna counts. Retrospx's development practice treats camouflage as a first-class design option, not a reluctant concession.

Convergence with edge and compute.

The tower of 2030 is not just a carrier colocation site. It's a potential edge compute node, an IoT aggregation point, a private network anchor, and — in certain geographies — a micro data center host. Owners who design sites only for 2020 carrier loading will leave these adjacent revenues to others. Owners who design for optionality will capture them.

The capital structure story.

Tower portfolios trade on EBITDA multiples that reflect their bond-like cash flow characteristics. The institutional bid for well-constructed portfolios has remained strong through every macro cycle since the asset class emerged. The durability of that bid is what gives disciplined operators — Retrospx among them — the confidence to underwrite builds to a 15-year hold horizon.

Market Outlook · Data Centers

The future of data centers is being written by power, not compute.

AI training clusters now draw 100+ MW each. Goldman Sachs projects a 165% increase in data center power demand by 2030. Morgan Stanley sees a 49 GW U.S. shortfall by 2028. The bottleneck is no longer silicon — it's electrons.

Hyperscaler capex has crossed a threshold.

Combined 2026 capex from Microsoft, Meta, Alphabet, Amazon, and Oracle is expected to exceed $600 billion — a figure that has approximately doubled in two years. Roughly 75% of that spend is AI-specific. The largest operators are no longer constrained by capital. They are constrained by the availability of power, land with usable entitlement, and engineering talent to deliver facilities on the compressed timelines their business models demand.

Interconnection queues are now the gating function.

Grid interconnection queues nationally exceed 2,100 GW — a figure larger than the entire U.S. generating capacity. PJM, the Mid-Atlantic/Midwest grid operator, recently approved projects that had been in queue for 8 years. In this environment, the operators who win are the ones who can demonstrate utility readiness at the outset of a project, not the ones who discover power constraints after they've already bought the land.

On-site and captive generation are becoming standard.

Hyperscalers are increasingly entering direct power-generation partnerships — including multi-billion-dollar clean-energy partnerships, nuclear restart agreements (Microsoft/Constellation for Three Mile Island's restart), and exploration of small modular reactors. The era of simply plugging into the local utility is ending for large-scale AI infrastructure. Captive generation is moving from exotic to expected.

Supply rationalization is underway.

Industry reporting in Q4 2025 suggested that roughly 30–50% of announced 2026 data center capacity is likely to slip into 2027–2028 as permitting, power, and supply-chain constraints force realistic rescheduling. This is not a demand problem — it is a delivery reckoning. Operators with credible power, land, and execution stories will capture the demand that materializes on time. The rest will lose it to those who can.

Implications for Retrospx.

Our data center strategy is explicitly constructed around the constraint reality: underwrite power first, land second, and only then move into entitlement and design. The projects we advance are the ones where the physics and the politics both clear. That discipline is what separates buildable pipeline from paper pipeline in the current market.

Market Analysis · Cost Pressures

What data centers actually cost to build today — and why the number keeps moving.

Capex per megawatt in new-build data centers has risen meaningfully over the past 36 months. The drivers are structural, not cyclical. Here are the inputs that have reshaped the cost curve.

Electrical infrastructure is the new long-lead line item.

High-voltage transformers, switchgear, and the medium-voltage distribution required to feed AI-density racks have lead times measured in years, not months. Utility-grade transformers in particular have seen lead times extend to 120+ weeks in many North American markets. Every month of delay on a single critical piece of electrical gear cascades into project-wide schedule risk — and often into premium pricing to jump the queue.
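A toy critical-path comparison makes the point. The week counts below are illustrative assumptions, not vendor quotes, but the structure is realistic: transformer procurement runs in parallel with construction and still finishes last.

```python
# Why a single transformer can set the whole schedule: compare the
# longest-lead electrical gear against the rest of the build. Durations
# in weeks are illustrative assumptions, not vendor quotes.

workstreams = {
    "transformer (order -> delivery)": 120,
    "shell construction":              60,
    "switchgear + MV distribution":    80,
    "fit-out + commissioning":         26,
}

# Shell and fit-out run sequentially; electrical gear procurement runs in
# parallel from day zero. The critical path is whichever finishes last.
build_path = workstreams["shell construction"] + workstreams["fit-out + commissioning"]
gear_path = max(workstreams["transformer (order -> delivery)"],
                workstreams["switchgear + MV distribution"])
print(f"Build path: {build_path} weeks; gear path: {gear_path} weeks")
print(f"Critical path: {max(build_path, gear_path)} weeks -> order transformers at day zero")
```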

Cooling architecture has changed.

Rack densities above 40 kW — routine for AI training and inference workloads — increasingly require direct-to-chip liquid cooling or rear-door heat exchangers, sometimes in combination with immersion. These systems are capex-heavier than traditional air cooling but deliver meaningfully better PUE and enable the density needed for modern compute. The industry is in the middle of a cooling architecture transition, and that transition has a cost.
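The economics of that transition are easiest to see through PUE, which is total facility power divided by IT power. The sketch below compares an air-cooled and a liquid-cooled design at an assumed 10 MW IT load; the PUE values and energy price are illustrative assumptions, not measured figures:

```python
# What the cooling transition buys: PUE = total facility power / IT power.
# Comparing an air-cooled design against direct-to-chip liquid cooling at
# an assumed 10 MW IT load and an illustrative energy price.

it_load_mw = 10.0
hours_year = 8_760
price_per_mwh = 70.0  # illustrative rate, not a market quote

for label, pue in [("air-cooled", 1.5), ("liquid-cooled", 1.25)]:
    facility_mw = it_load_mw * pue
    annual_cost = facility_mw * hours_year * price_per_mwh
    print(f"{label:>13}: {facility_mw:.1f} MW facility draw, "
          f"${annual_cost / 1e6:.1f}M/yr energy")
```

At these assumed inputs, the lower-PUE design saves on the order of $1.5M per year in energy on a 10 MW IT load, which is the recurring offset against the heavier cooling capex.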

Land and entitlement are pricing differently.

In markets where power availability is known to exist — and is known to be usable for data center load — land pricing has increased materially. The scarce input is not really the land itself; it's land plus a credible path to energized power service. Developers who bring power first find themselves in a very different negotiating posture than those who lead with land.

Construction materials and labor.

Data center construction competes for concrete, steel, electrical labor, and specialized MEP contractors against every other infrastructure project in the market. In hyperscale-centric submarkets, contractor capacity has become a rate-limiting resource in its own right.

The implication is simple.

Cost discipline in data center development has become a real competitive advantage. It requires underwriting the full project — power, land, construction, gear, interconnection — at today's realistic prices, not at 2022 prices, and structuring the capital to absorb overruns without derailing the deal. The operators who survive this cycle will be the ones who learned to underwrite honestly.

Supply Chain · Silicon

The chip bottleneck has moved upstream — and it's staying there.

GPU fabrication is no longer the binding constraint on AI infrastructure. The pressure has shifted to advanced packaging (CoWoS) and high-bandwidth memory (HBM). Here is what that actually means for data center procurement.

CoWoS is the new choke point.

TSMC's chip-on-wafer-on-substrate packaging — the process that bonds HBM stacks to a GPU die on a shared interposer — is reportedly fully allocated well into 2027. Capacity is growing (TrendForce projects CoWoS capacity reaching roughly 120,000–130,000 wafers per month by the end of 2026, up from about 75,000 in 2025), but hyperscaler capex doubling over the same period means demand continues to outrun supply.
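Using the capacity figures cited above and the hyperscaler capex trajectory discussed earlier (roughly doubling over two years), the mismatch is easy to quantify:

```python
# Capacity is growing fast, and still losing ground. Rough comparison of
# projected CoWoS wafer capacity growth against hyperscaler capex growth
# over the same window, using the figures cited above.

capacity_2025, capacity_2026 = 75_000, 125_000  # wafers/month (midpoint of 120-130k)
capacity_growth = capacity_2026 / capacity_2025 - 1
capex_growth = 1.0  # capex roughly doubling over ~two years

print(f"CoWoS capacity growth: {capacity_growth:.0%}")
print(f"Hyperscaler capex growth over same window: ~{capex_growth:.0%}")
# If demand scales anywhere near capex, a ~67% capacity add doesn't clear the queue.
```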

HBM supply is still tighter.

High-bandwidth memory — produced almost entirely by SK Hynix, Samsung, and Micron — has seen contracted volumes absorbed 12+ months out, with demand growing ~80–100% year-over-year against supply growth of 50–60%. HBM prices reportedly rose 30% in Q4 2025 alone. The capacity investment is happening, but new fabs take 18–24 months to build, equip, and qualify. The gap does not fully close before 2028–2029 at current investment rates.
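Compounding the midpoints of those growth ranges shows why the gap widens before it closes. This is a stylized index, not a forecast; the point is that the ratio only turns when supply growth overtakes demand growth, which at an 18–24 month fab build cycle doesn't happen before 2028 at the earliest.

```python
# Why the HBM gap persists into 2028-2029 at cited growth rates: index
# demand and supply at 100 in 2025 and compound the midpoints of the
# ranges above (~90% demand growth vs ~55% supply growth per year).

demand, supply = 100.0, 100.0
for year in range(2026, 2030):
    demand *= 1.90
    supply *= 1.55
    print(f"{year}: demand index {demand:,.0f} vs supply index {supply:,.0f} "
          f"(gap {demand / supply:.2f}x)")
# At constant rates the ratio widens every year. Closing it requires supply
# growth to overtake demand growth, which is what the new fab capacity
# coming online in 2027-2028 is meant to deliver.
```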

Lead times for data center GPUs.

Industry reporting consistently puts current data-center GPU lead times in the 36–52 week range, with premium parts (Blackwell-class) at the long end of that range. Hyperscaler forward orders placed in 2025 are understood to have consumed most of Nvidia's allocation capacity through the end of 2026 and into 2027. For non-hyperscale buyers, the implication is stark: plan procurement on a multi-quarter horizon, diversify SKU selection where the workload permits, and accept that spot availability is no longer a reliable strategy.
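In practice that means working the calendar backwards. A minimal sketch, with a hypothetical commissioning date:

```python
# Working backwards from a target commissioning date with 36-52 week GPU
# lead times. The date is hypothetical; the point is the planning horizon.

from datetime import date, timedelta

target_online = date(2027, 6, 1)  # hypothetical commissioning target
lead_weeks_lo, lead_weeks_hi = 36, 52

order_latest = target_online - timedelta(weeks=lead_weeks_lo)
order_prudent = target_online - timedelta(weeks=lead_weeks_hi)
print(f"Order no later than: {order_latest}")
print(f"Prudent order date:  {order_prudent}")
# With Blackwell-class parts at the long end of the range, "prudent" means
# roughly a year before the facility needs to generate revenue.
```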

What this means for data center operators.

A facility without chips is a facility without revenue. Procurement risk and infrastructure risk have become coupled in a way that they historically were not. The operators who will deliver into this window are the ones who start long-lead procurement conversations at the same time they start land and power conversations — not eighteen months later when shell construction is finishing.

The Retrospx position.

We do not buy GPUs for our tenants. But we do design our facilities and our commercial structures with chip-supply reality in mind — including phased commissioning that matches chip deliveries, flexible bay allocation for multiple tenant deployment schedules, and pre-negotiated utility service that does not become stranded if a tenant's chip delivery slips. In a constrained silicon environment, that operational flexibility is worth meaningful basis points of project IRR.

Continue the Conversation

Want to go deeper on any of these?

We're happy to walk through specific markets, project types, or deal structures. Infrastructure decisions reward real conversations over polished decks.