Posted on Thu 20 February 2014

### How Steve Perlman's "Revolutionary" Wireless Technology Works - and Why It's a Bigger Deal than Anyone Realizes

Before I get into the technical evaluation of Artemis's demo videos and much-ballyhooed claims in the press, let's cut through the hype:

• Is this going to revolutionize wireless communications? While these demos alone don't necessarily demonstrate speeds beyond what is theoretically achievable with LTE systems today, I think the evidence is clear that this technology can offer a solution to the "spectrum crunch" problem, so I'd say yes. From a mobile consumer's standpoint, it'll just seem like the next step in evolution from 4G to 5G - much faster, more consistent speeds, and with lower latency. Now, whether or not it actually gets deployed by carriers is another matter altogether.
• Is this invention completely unique? No! Just last year, a German university demonstrated a working prototype of essentially the same technique (albeit without a snazzy streaming video demo or compatibility with traditional cell phones) in the video here. The theory behind this sort of system is referred to as "network MIMO" or "cooperative MIMO" in the literature and "coordinated beamforming" in the 3GPP LTE-A specification [30], and dates back to 2001, if not earlier [1] [2]. But then again, when is an invention ever done in a vacuum? Calculus, the telephone, and the Hall-Héroult process for smelting aluminum were all invented simultaneously and independently. What Artemis has done is take techniques that are being proposed for upcoming 5G systems and figure out how to solve all the engineering challenges involved, years ahead of the rest of the industry [3] [4].
• Have they broken the Shannon limit? No, they've just side-stepped it. Each user now has their own channel, and can use it up to the full Shannon limit without having to share it with anyone else. See the section of their whitepaper beginning with "Shannon’s Law is not about spectrum data rate limits, it is about channel data rate limits".
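To see what "side-stepping" the Shannon limit buys you, here's a back-of-the-envelope sketch. The 20 MHz bandwidth, 20 dB SNR, and 10 users are my own illustrative numbers, not Artemis's:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon channel capacity: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 20e6    # 20 MHz LTE-like channel (illustrative)
snr = 100   # 20 dB SNR, in linear scale

# Conventional cell: 10 users share one channel, each effectively gets 1/10.
shared = shannon_capacity_bps(B, snr) / 10

# pCell-style: each user gets the whole channel to themselves.
dedicated = shannon_capacity_bps(B, snr)

print(f"shared:    {shared / 1e6:.1f} Mbps per user")
print(f"dedicated: {dedicated / 1e6:.1f} Mbps per user")
```

Neither number exceeds Shannon's bound for a single channel - the win comes entirely from no longer dividing that bound among users.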

NOTE: Be sure to read the really mind-blowing implications this technology could have far beyond communication in my conclusion if you're in a hurry.

All right, now let's see how this system actually works. I'm going to start off by explaining it by analogy first, and then get into radios and information theory.

The first way to think of this is by imagining the cocktail party problem - you're in a room with a large number of other people, and everyone's talking at the same time. Because of this interference, it's hard for you to hear the person you're having a conversation with. Humans leverage the time delay of a sound reaching each of our ears in succession to help identify the direction of each voice, and then do some sort of signal processing in the auditory cortex to try to filter out the other voices not of interest to us [37]. How could you mitigate this problem? Think of your friend's voice as the signal you're trying to receive, and everyone else's voice as the noise. To tackle this problem, you could just have everyone take turns speaking - create time slots, and assign people to each slot so that at any given time, there's only one person speaking. In the radio world, that's known as time division multiple access (TDMA). You could also tell each person to speak at a different pitch, so listeners could filter out speakers they're not interested in; this is called frequency division multiple access (FDMA). Or you could have each person speak in a different language, so you could just try to listen to the one who's speaking in your language - this is known as code division multiple access (CDMA). Finally, by having everyone speak softly or use cones in front of their mouths (or ears for listeners), you could limit or focus your sound waves to a particular region of space - this is space division multiple access (SDMA).
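The "different languages" analogy for CDMA can be made concrete with a toy simulation of my own, using orthogonal Walsh spreading codes. Two speakers transmit simultaneously on the same "channel", yet each listener recovers only the stream whose code it knows:

```python
import numpy as np

# Two "speakers" share the channel using orthogonal spreading codes
# (Walsh codes) -- the "different languages" of the CDMA analogy.
code_a = np.array([1, 1, 1, 1])
code_b = np.array([1, -1, 1, -1])

bits_a = np.array([1, -1, 1])   # speaker A's data (+1/-1 symbols)
bits_b = np.array([-1, -1, 1])  # speaker B's data

# Each symbol is spread by the speaker's code; both then talk at once,
# so the channel carries the sum of the two spread signals.
signal = np.concatenate([a * code_a + b * code_b
                         for a, b in zip(bits_a, bits_b)])

# A listener "fluent" in code A correlates against it to recover A's bits;
# B's contribution averages out to zero because the codes are orthogonal.
chips = signal.reshape(-1, len(code_a))
recovered_a = np.sign(chips @ code_a)  # A's bits: 1, -1, 1
recovered_b = np.sign(chips @ code_b)  # B's bits: -1, -1, 1
print(recovered_a, recovered_b)
```

The orthogonality of the codes (their dot product is zero) is what lets both conversations coexist in the same time slot and frequency band.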

Another analogous situation is that of trying to eliminate noise from reaching your ears. Amar Bose was frustrated by the roar of jet engines overwhelming the music he was trying to listen to on a plane, so he invented noise-cancelling headphones. These use microphones to pick up the background noise you're trying to get rid of, and generate a sound waveform that matches it in amplitude and frequency, but just shifted in phase by 180 degrees. This ensures that when this waveform is sent out through your headphones and combines with the background noise, they destructively interfere, as in this image.
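The 180-degree phase-shift trick is easy to verify numerically. Here's a toy single-tone model (real ANC headphones must of course measure and adapt to the noise in real time, which I skip):

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)

# A steady 120 Hz drone standing in for the jet-engine roar.
noise = np.sin(2 * np.pi * 120 * t)

# Same amplitude and frequency, shifted in phase by 180 degrees (pi radians).
anti_noise = np.sin(2 * np.pi * 120 * t + np.pi)

# The two waveforms destructively interfere: the sum is essentially zero.
residual = noise + anti_noise
print(np.max(np.abs(residual)))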

The coolest example of interference cancellation is in modern telescopes that have adaptive optics systems. Here's the problem: you're trying to get your billion-dollar telescope to take clear, sharp pictures of the night sky, but turbulence in the atmosphere keeps distorting the light you receive from the stars. Is there any way you could cancel it out? Physicist Horace Babcock came up with the idea back in 1953 [21]. After funding from the military in the late 1980s (which was interested in using the idea for the inverse problem - improving spy satellites looking down, instead of telescopes looking up), it is now utilized in a number of large telescopes around the world, including the Palomar observatory, the Calypso instrument at Kitt Peak, the Keck observatory at Mauna Kea, the Gran Telescopio Canarias, and the Very Large Telescope in Chile. The principle of adaptive optics is based on phase conjugation, that is, the reversal of the light's phase to cancel out the distortion; as long as corrections can be made at higher than the Nyquist frequency, the telescopes can approach their diffraction limits. The laser beam you're seeing (a laser guide star at an altitude of 95 km) is used to calculate the noise the atmosphere is introducing into the optics, using a high-powered (~12 W), continuous wave sodium laser.

The closest analogy, though, is that of vectored digital subscriber lines (DSL), which bring high-speed Internet to homes and businesses over their legacy analog phone lines without having to go through the expensive proposition of laying down new fiber optic cables. The problem with regular DSL is that phone lines cause interference to each other, causing speeds to deteriorate. What Prof. John Cioffi at Stanford realized was that by measuring the interference and adjusting for it to cancel out the noise, you could improve speeds dramatically. This pre-compensation adjustment is known as precoding. With two lines, the computation is not that intensive, but when you have to do this simultaneously for even 200 lines, you need to do over 2.5 trillion calculations per second, generating around 80 Gbps of vectoring data, with each line's symbols required to be synchronized to within under a millisecond [5]. What Cioffi did for DSL, Perlman is now doing for 4G.

What you'll notice all these approaches have in common is a closed feedback loop. This allows the algorithm to continuously track the noise in the system and adapt to it in near-real time. With network MIMO, you're trying to exploit the interference of waves to your advantage, not avoid it by using a different frequency or limiting your power output. The difference between network MIMO and adaptive optics telescopes is that in the latter, we're correcting for the noise at the receiver, not the transmitter. In addition, with radios spread out over space, we have a distributed system. The laser guide star is just a way to measure noise in the system, similar to how radios in a network MIMO system attempt to measure the channel state information.

The foundation for pretty much all the modern work in interference cancellation for radio frequencies is a seminal 1983 paper by Costa entitled "Writing on Dirty Paper". As the title suggests, think of a situation where a message is being passed between two people on a piece of paper. Unfortunately, this paper has a bunch of random dots placed on it, and the question is - how much information can you pass with this "noise" (dots) on your "channel" (paper) compared to if there were no noise? The surprising conclusion was that the noise does not affect the channel capacity at all, assuming that the noise is known to the transmitter. The final channel capacity is:

$C = \frac{1}{2}\log_{2}\left(1 + \frac{P}{N}\right)$

where P is the transmit power and N is the power of the noise unknown to the transmitter [6].

In practice, this "interference presubtraction" done in dirty paper coding is too computationally expensive to do [7] [8]. There are quite a few alternative precoding techniques (both linear and non-linear), each of which has its own pros and cons [9] [10]. We'll start by going through two of the most common choices, both of which are mentioned in the relevant Artemis patents [31] [32] [33] [34] (though are obviously much simplified, and only approach the performance of dirty paper coding in some cases).

The first approach is called zero-forcing precoding, the goal of which is to choose weights to "cancel the interference among user streams". These weights are also known as the precoding/beamforming vectors. We're going to use a matrix-vector model to represent our system, where:

• there are k users
• x represents a vector of the bits of data we want to send to the k-th user. For example, if we want to send the word "android", we would convert each letter to its ASCII code, the first of which would be a=97, or 1100001 in binary.
• n represents a vector of noise (assumed to be AWGN)
• y represents a vector of the data received by the k-th mobile phone
• H represents the M×N channel matrix, where there are M transmitting antennas and N receiving antennas. Each element in the matrix is a complex number "whose magnitude and phase represent signal attenuation and delay from sender antenna i to receiver antenna j" [11].

The equation then becomes: $y = Hx + n$

If we set up two transmitters and two corresponding receivers (labelled 1 & 2), each sending its own stream of data (x1 and x2, respectively), we get the following equations for the received signal at each mobile phone (ignoring the noise term):

• for mobile phone 1:
$y_{1} = H_{11}x_{1} + H_{12}x_{2}$
• for mobile phone 2:
$y_{2} = H_{21}x_{1} + H_{22}x_{2}$

In other words, each mobile phone receives a mix (linear combination) of both x1 and x2 - crosstalk interference, not what we want [13].

Now if the transmitter knows the channel matrix H, it can multiply the inverse of H with the data vector, and send that to each receiver. Each receiver then only receives its own data, without the other's interfering. Let's see a concrete example (for simplicity, we're keeping the numbers in the channel matrix real, and using decimal numbers for the signal instead of binary ones):

• Let:
$x = \begin{pmatrix} 9 \\ -0.5 \end{pmatrix}$
• Let:
$H = \begin{pmatrix} 2 & 3 \\ -1 & 0 \end{pmatrix}$
• we calculate the inverse of H, and call it G:
$G = \begin{pmatrix} 0 & -1 \\ \frac{1}{3} & \frac{2}{3} \end{pmatrix}$
• as expected, H * G equals the identity matrix.
• without zero-forcing precoding, mobile phones 1 and 2 receive incorrect values:
$y_{1} = H_{11}x_{1} + H_{12}x_{2} = (2)(9) + (3)(-0.5) = 16.5$
$y_{2} = H_{21}x_{1} + H_{22}x_{2} = (-1)(9) + (0)(-0.5) = -9$
• with precoding, we transmit $Gx$ instead of $x$; since $HG = I$, mobile phones 1 and 2 receive the correct values:
$y_{1} = (HG)_{11}x_{1} + (HG)_{12}x_{2} = (1)(9) + (0)(-0.5) = 9$
$y_{2} = (HG)_{21}x_{1} + (HG)_{22}x_{2} = (0)(9) + (1)(-0.5) = -0.5$
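The worked example above can be reproduced in a few lines of NumPy:

```python
import numpy as np

H = np.array([[2.0, 3.0],
              [-1.0, 0.0]])   # channel matrix from the example
x = np.array([9.0, -0.5])     # data intended for phones 1 and 2

G = np.linalg.inv(H)          # zero-forcing precoder

# Without precoding: each phone hears a linear mix of both streams.
print(H @ x)                  # 16.5 and -9.0 -- crosstalk, wrong values

# With precoding: transmit G @ x, and the channel "undoes" the precoder,
# so each phone receives only its own data.
print(H @ (G @ x))            # 9.0 and -0.5, as intended
```
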

This zero-forcing precoding method works, but is a sub-optimal solution, and in practice there are some other issues to take into consideration [14] [11].

The second approach is called block diagonalization (BD) using singular value decomposition (SVD). Wow, that's a mouthful. "Decomposition" just means that we're taking a matrix and factoring it into the product of multiple matrices. Block diagonalization means we want a matrix where only the values along the diagonal are non-zero, and everything else is 0. This makes sense when you think about the model as [15]:

Basically, we want to somehow transform our signal so that we end up with non-zero values only along the diagonal for the data each user is supposed to get (corresponding to the signal from antenna i to mobile j, where i=j), and zero values everywhere else (no interference for i not equal to j). This is equivalent to constraining the middle term (inter-cell or multi-user interference) in the equation above to 0. This matrix W is going to serve as our precoding matrix, so now the signal received by user k is: $y_{k} = H_{k}W_{k}x_{k} + n_{k}$ (the interference terms $H_{k}W_{j}x_{j}$ for $j \ne k$ having been forced to zero).

Great, so how do we find this magical precoder matrix W? Well, because we've forced the middle term in the equation to be zero, the precoder matrix W has to lie in the null space of our channel matrix H in order for this to be satisfied. Now, as it turns out, the SVD of our channel matrix H will give us exactly what we need (bases for all four fundamental subspaces of a matrix): $H = U\Sigma V^{T}$

You'll notice that the singular values of the matrix H lie all along the diagonal of Σ - just like we wanted. And we've also got an orthonormal basis for the nullspace of our matrix H now. To get the precoder matrix W, we select the basis defined by the row space of H, which represents "the transmission vectors which maximize the information rate of the user" [16] [15] [17] [18]. In other words, the columns of V (equivalently, the rows of V^T) become the columns of our precoder matrix W:

Essentially what we're doing here is just a linear transformation. When H is a square matrix with real values, you can think of the matrices U and V from the SVD as "rotations" and the matrix Σ as a scaling matrix. Let's try this with a concrete example of a 2x2 channel matrix H, this time with complex values. Because the channel matrix H is complex, we take the conjugate (Hermitian) transpose instead of the ordinary transpose, so we'll replace the superscript T with H in the formulas that follow:

$H = \begin{pmatrix} 0.9+0.1i & 0.2-0.3i \\ -0.3+0.5i & -0.7-0.1i \end{pmatrix}$
$x = \begin{pmatrix} -5 \\ 0.3 \end{pmatrix}$

The SVD of our channel matrix can be calculated like this (I'm using Mathematica, but you could also use the scipy or numpy packages in Python):

{u, w, v} = SingularValueDecomposition[(H)], where u, w, v represent matrices U,Σ, and V, respectively

$u = \begin{pmatrix} -0.752+0i & 0.659+0i \\ 0.525-0.397i & 0.599-0.454i \end{pmatrix}$
$w = \begin{pmatrix} 1.1457 & 0 \\ 0 & 0.6909 \end{pmatrix}$
$v = \begin{pmatrix} -0.902-0.05936i & 0.2692-0.332i \\ -0.4175+0.09197i & -0.351+0.833i \end{pmatrix}$

Now, we send the signal (this step is known as transmit precoding) encoded as:

$\mathrm{sent} = v \cdot x = \begin{pmatrix} 4.591+0.1972i \\ 1.982-0.2099i \end{pmatrix}$

$\mathrm{received} = H \cdot \mathrm{sent} = \begin{pmatrix} 4.4457-2.22\times 10^{-16}i \\ -2.884+2.185i \end{pmatrix}$

To decode the signal (a step known as receiver shaping), the receiver multiplies by the Hermitian of U and then by the inverse of the singular value matrix Σ (both of which, I assume, are passed to it over some separate, low-bandwidth channel). Matrices U and V are unitary, which is why this swapping around of V and U with their Hermitians works: the product of a unitary matrix with its Hermitian equals the identity matrix. Note that the "." symbol in Mathematica stands for matrix multiplication:

$u^{H} = \begin{pmatrix} -0.752+1.8125\times 10^{-17}i & 0.525+0.3978i \\ 0.6589+4.82\times 10^{-17}i & 0.5995+0.454i \end{pmatrix}$
$w^{-1} = \begin{pmatrix} 0.8728 & 0 \\ 0 & 1.447 \end{pmatrix}$
$\mathrm{decoded} = w^{-1} \cdot u^{H} \cdot \mathrm{received} = \begin{pmatrix} -5+4.381\times 10^{-16}i \\ 0.3+9.839\times 10^{-17}i \end{pmatrix}$

We've recovered our original signal, ignoring the rounding errors in the complex values, which are essentially 0. The SVD of the channel matrix can "be seen as the separation of the MIMO channel into two crosstalk-free transmission channels with transmission coefficients" equal to the singular values that lie along the diagonal of the Σ matrix; in other words, it is split into multiple parallel single input single output (SISO) channels. The maximum number of independent channels you can create is equal to the number of singular values in the Σ matrix [19].
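The same round trip can be reproduced in NumPy, whose `svd` conveniently returns V hermitian directly. The signs in U and V may differ from Mathematica's output, but the signal is recovered either way:

```python
import numpy as np

H = np.array([[0.9 + 0.1j, 0.2 - 0.3j],
              [-0.3 + 0.5j, -0.7 - 0.1j]])
x = np.array([-5.0, 0.3])

# numpy returns H = U @ diag(s) @ Vh, where Vh is V hermitian.
U, s, Vh = np.linalg.svd(H)
V = Vh.conj().T

sent = V @ x                     # transmit precoding
received = H @ sent              # through the channel: U @ diag(s) @ x

# Receiver shaping: undo U, then undo the singular-value scaling.
decoded = np.diag(1 / s) @ (U.conj().T @ received)
print(np.round(decoded.real, 6))  # -5 and 0.3, the original signal
```
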

How can we intuitively understand this?
"Transmit signal vector s is first transformed using matrix V into the orthogonal space expanded by Σ. The transformed signal vector components are transmitted via singular value matrix Σ. Matrix U[hermitian] is then used for the reverse transformation to the original signal vector space. As a result, the crosstalk does not “disappear” from the signal path in this transformed model, but rather is simply hidden away in transformation matrices V[hermitian] and U." [12]

How cool is that? The matrices V and U hermitian are the pre-steering matrix and the beamsteering matrix, at the transmitter and receiver, respectively [36]. If you think about it, the only real way you can ensure each mobile phone gets their own signal, without each phone interfering with every other phone, is to have the radio beams narrowly focused on each mobile - sort of like a spotlight. That's what beamforming does in regular MIMO radios, like this image (the difference in network MIMO is that the transmitter antennas are spaced far apart, so they wouldn't be emanating from a single point):

The detailed picture of how Artemis works is the following set of procedures [31]:

1. Transmitter sends training signal for frequency offset estimation
2. User(s) estimate the frequency offset and sends feedback
3. Transmitter pre-compensates for frequency offset and sends precoded training signal for channel estimation
4. User(s) estimate the channel and send channel state information (CSI) to the transmitter
5. Transmitter estimates the phase offset and computes the precoded weights based on the CSI
6. Transmitter sends precoded data to user(s)
7. User(s) demodulate data
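Idealizing heavily (no noise, perfect CSI, and skipping the frequency- and phase-offset steps 1-2 and 5a), the feedback loop above can be caricatured in a short simulation, with a pseudo-inverse zero-forcing precoder standing in for Artemis's proprietary weight computation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_antennas, n_users = 4, 4

# The unknown physical channel between the radios and the phones.
H = (rng.normal(size=(n_users, n_antennas))
     + 1j * rng.normal(size=(n_users, n_antennas)))

# Steps 3-4: send known training symbols (one pilot per transmit antenna);
# each user reports what it heard as channel state information (CSI).
training = np.eye(n_antennas)
csi = H @ training                 # idealized, noiseless estimate of H

# Step 5: compute precoding weights from the CSI.
W = np.linalg.pinv(csi)

# Steps 6-7: precoded transmission; each user demodulates only its own symbol.
data = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j])
received = H @ (W @ data)
print(np.round(received, 6))       # matches `data`: no inter-user crosstalk
```
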

Now that we've described the system, you may have some further questions, like:

• How is this different from multi-user MIMO (MU-MIMO)? While network MIMO uses similar algorithms to MU-MIMO [35], it's not the same thing. Regular MIMO works by having multiple antennas on both the transmitter and receiver, allowing simultaneous transmissions at full capacity on the same frequency. But this can only get us so far - each additional antenna adds extra power, cost, and space requirements to the radio devices, which is why most WiFi routers today stop at about 4 antennas. Network MIMO is fundamentally different - essentially SDMA taken to its ultimate conclusion (individual beams for each user); SDMA is how the industry to date has managed to increase wireless bandwidth exponentially for decades [20]. So unlike MU-MIMO, where the multiple antennas are all in one location, with network MIMO the antennas are spread out over a large area:

"practical implementations of MU-MIMO techniques... have yielded up to a ~3x increase (with four transmit antennas) in DL data rate via space division multiple access (SDMA). A key limitation of MU-MIMO schemes in cellular networks is lack of spatial diversity on the transmit side. Spatial diversity is a function of antenna spacing and multipath angular spread in the wireless links. In cellular systems... transmit antennas at a base station are typically clustered together and placed only one or two wavelengths apart due to limited real estate on antenna support structures... and due to limitations on where towers may be located" [33].

• Will it work if you're moving at high-speed, in a car or a train? Great question. What we're concerned about here is channel fading due to the Doppler effect of a mobile phone in motion which will cause the channel to change every millisecond or so (this is called the coherence time of the wave, just like in the case of adaptive optics telescopes). In their patent application [26] specifically tackling this issue, it's stated that "at the carrier frequency of 400 MHz, networks with latency of about 10 msec (i.e., DSL) can tolerate clients' velocity up to 8 mph (running speed), whereas networks with 1 msec latency (i.e., fiber ring) can support speed up to 70 mph (i.e., freeway traffic)". Previous considerations of such a system had assumed this was infeasible:

"For systems with a very large number of jointly processed antennas and targeting mobile cellular communications, the centralized computation of the precoding matrix, of the precoded based band signals, and distribution of these signals to all the antennas would require a large delay, which is incompatible with the short channel coherence time due to user mobility" [27].

• Will it work for WiFi as well? It's protocol agnostic, so it could work in unlicensed spectrum as well. The issue is that you don't have complete control over all the other transmitters, so you can't coordinate them.

• Does this work for just downloading, or uploading as well? It should work for both - the uplink is just the inverse problem. In the literature the downstream transmission technique is called "joint transmission" and the upstream is called "joint detection" [25]. If time-division duplexing is utilized, you could even use reciprocity to help deduce the channel matrix in one direction from the other, as they tend to be correlated.

• How much overhead is involved in gathering the channel state information (CSI)? Don't know. It could be significant [24], but doesn't appear to be from the presentation.

• How much backhaul is required to support this? We don't have any figures yet, but it could be significant [23].

• Does it require full and perfect channel state information to be known? At least at the transmitter, but even if it's not perfect there are statistical techniques to deal with it.

So how has Artemis seemingly pulled this off, years ahead of everyone else? A few things stand out from the presentation and patents:

• They've gone to great lengths to make it backwards compatible with LTE, and are probably using the built-in mechanisms LTE has for sending feedback to the transmitters.
• They've optimized the algorithm to scale linearly (O(n) instead of O(n^3) for SVD in general) on commodity hardware (2 8-core servers for the demo), not FPGAs or custom silicon.
• Their algorithms are apparently robust to imperfect channel state information, with low overhead for the feedback signal from the mobile phones.
• They've figured out how to get extremely tight time synchronization indoors between the radios (no GPS reception indoors, so you'd probably have to use the precision time protocol to get sub-millisecond accuracy, with one pCell with GPS nearby outdoors to serve as the master).
• Each pCell radio uses just a milliwatt of power, two orders of magnitude less than typical WiFi routers (you do need a multitude of pCell radios, though, but still).
• The flexibility of the architecture allows serendipitous placement of pCells, which in turn means you can put them within line-of-sight of each other, using microwave backhaul as opposed to fiber (which is faster to set up).
• Interference cancellation allows you to use lower-frequency radio waves with better propagation characteristics (i.e., longer range). Until now, governments have had to limit the power output of radio antennas so as not to cause undue interference to other users of the spectrum, especially when using a frequency that can travel very far (amateur radio, AM, and TV white space frequencies, etc.). Note that the 1 millisecond latency they refer to is the overhead of doing signal processing, not the round-trip time from the base station to the cloud server and back (which would be ~100 milliseconds).

So how soon can we expect this to be adopted by the industry? Though we're told that trials in a single city with a major mobile carrier should be expected in 2014, with a larger rollout in 2015, the diffusion of innovations teaches us that the best technology doesn't always win, and adoption certainly doesn't happen overnight. Smart antennas have been around for 20 years, but have not been adopted by cell phone carriers. Mobile carriers would need to spend billions of dollars on new base station radios, a massive cloud (C-RAN) infrastructure, and tons of backhaul fiber to upgrade their networks to support this, which could easily take years:

"Depending on the existing infrastructure of a mobile operator, both backhaul capacity and latency requirements of some CoMP schemes may be the main cost drivers or potential show stoppers on the roadmap towards CoMP." [30] [CoMP is Coordinated Multi-Point, a proposed 5G technology for doing network MIMO]

Mobile carriers are primarily concerned with maintaining, if not increasing, their average revenue per user (ARPU), and love charging you by the MB of data consumed. So data prices are only going to drop if they can maintain this revenue, or there's a ton of competition and they keep undercutting each other in a race to the bottom. With this technology, it's conceivable that 5G wireless could displace both cable and DSL connections within a few years, as is claimed in the presentation. Historical trends like the exponential growth in Internet bandwidth [22] (Gilder's law, increasing by 50% every year) seem to be driving a drop in cost/bit, though prepaid data is still quite pricey around the world; I made a quick chart of mobile broadband prices, speed, and internet penetration around the world in Tableau using data from the ITU, Akamai, and Wikipedia.

But here's the real doozie: The last slide of the presentation at Columbia says the following:

• "pCell technology is not just limited to communications"
• the "synthesis of a tiny radio-wave bubble in real time software opens up a new wave of applications"
• "the really radical announcements are yet to come"
• "hint: we showed one in the intro video"

What in the world is he talking about? "What else is radio used for besides communication?" I asked myself. Nothing, besides radio astronomy. But then I asked myself "What else could radio be used for?" and the answer became clear: wireless power transmission! You see, while Tesla's idea of wireless power transmission never came to fruition, using microwave beams to transfer electricity between two places within line-of-sight distance of each other is nothing new - William Brown demonstrated a wirelessly-powered helicopter using microwaves beamed from the ground back in the 60s (notice the tether in the picture, to keep the helicopter positioned in the right place) [28] [29]. It was replicated recently by the BBC science show Bang Goes the Theory in their video. You just have an array of rectennas at the receiver to convert the received RF energy to DC power. But it's never been feasible or practical for any real-world applications because:

• You would need to keep steering the transmitting antenna to keep it pointed towards the receiver (which, for all the interesting applications, is mobile). Steering antennas requires either an expensive gimbal mechanism, or an even more expensive phased array. Kymeta's metamaterial antennas could dramatically reduce this cost, but it's in general hard to do.
• The amount of power you're transmitting through the single antenna is far above what is considered safe, should any living being pass through the beam.
• The inability to focus the microwave beam tightly means a lot of energy is wasted, and the receiver cannot be near any life forms.

But with Artemis:

• You could use beamforming instead of beamsteering, eliminating the cost of traditional approaches.
• The power being transmitted would be split among hundreds of antennas, each of which individually transmits at a level of RF energy that isn't harmful. At the location of the receiver, the waves constructively interfere to add up to the necessary power, while everywhere else they just add up to noise.
• The ability to focus the radio waves into a sphere of energy just a cm in diameter means very little energy would be wasted, and it's safe to use around humans (assuming the receiver itself is a cm thick).
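Here's a toy calculation of my own (ignoring path loss and using made-up geometry) showing why distributed, phase-conjugate transmission concentrates energy at a single point while staying noise-like everywhere else:

```python
import numpy as np

rng = np.random.default_rng(1)
n_antennas = 100
wavelength = 0.12                    # roughly 2.5 GHz; illustrative value

# Antennas scattered around a 10 m x 10 m room; one target receiver.
antennas = rng.uniform(0, 10, size=(n_antennas, 2))
target = np.array([5.0, 5.0])
elsewhere = np.array([5.0, 5.3])     # a point just 30 cm from the target

def field_at(point, phases):
    """Complex field summed over all antennas (path loss ignored)."""
    d = np.linalg.norm(antennas - point, axis=1)
    return np.sum(np.exp(1j * (2 * np.pi * d / wavelength + phases)))

# Phase-conjugate transmission: each antenna pre-rotates its phase so that
# all waves arrive at the target exactly in phase.
d_target = np.linalg.norm(antennas - target, axis=1)
phases = -2 * np.pi * d_target / wavelength

print(abs(field_at(target, phases)))     # all 100 waves add coherently
print(abs(field_at(elsewhere, phases)))  # incoherent sum, ~sqrt(100) scale
```

At the target the field amplitude equals the number of antennas; a few wavelengths away the phases are effectively random and the amplitude collapses to random-walk scale, which is the "tiny radio-wave bubble" intuition.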

My brain almost exploded when I realized this. While 5G is a big leap in performance from existing 4G technology, it doesn't provide any fundamentally new capabilities to us. Wireless power, though, would be a total game-changer. What would the implications be?

• Consumer electronics that never need to be plugged in again - phones, tablets, laptops, televisions could all be powered wirelessly in the home and office.
• With transmission towers spaced every kilometer along major highways, electric cars would not need massive, expensive batteries. Everyone could afford a Tesla, and the demand for oil would drop.
• With transmitters on a few rooftops in a city, you could have drones and quadcopters delivering groceries and mail, again without heavy batteries that limit their flying time.
• You could build an electrical grid that's a wireless mesh network, especially in developing countries, and have excess power from solar panels beamed to other locations which need it.
• There are probably a slew of other ideas that I haven't even considered - readers, please comment below!

Is there any evidence to substantiate this hypothesis?

This approach looks more promising than others working in the wireless power area, such as Cota, uBeam, and WiTricity. If this works, Perlman and his team would go down in history as some of the greatest inventors ever, right next to Marconi and Tesla. Let's just hope they're not too ahead of their time, because "that's an expensive place to be".

The real wireless revolution here is not communication - it's power. And it's just getting started...

Citations & References:

Many thanks to my electrical engineering friends in the wireless industry and academia for reading over this draft - if you find any errors, please let me know.

[10] Overcoming Interference in Spatial Multiplexing MIMO Cellular Networks (Prof. Heath has published research with Dr. Forenza, Artemis's chief scientist)

Network MIMO with Zero Force

Network Multiple-Input and Multiple-Output for Wireless Local Area Networks

Interference coordination and cancellation for 4G networks

Coordinated Multipoint: Concepts, Performance, and Field Trial Results

Rethinking Network MIMO: Cost of CSIT, Performance Analysis, and Architecture Comparisons

Interference alignment — Recent results and future directions

Network MIMO: Overcoming Intercell Interference in Indoor Wireless Systems

Optimal Multiuser Zero-Forcing with Per-Antenna Power Constraints for Network MIMO Coordination

The Practical Challenges of Interference Alignment

Overview of MIMO Systems

Optimal Beamforming

Performance Limits of a Cloud Radio

MIMO BROADCAST CHANNELS WITH BLOCK DIAGONALIZATION AND FINITE RATE FEEDBACK

We Recommend a Singular Value Decomposition

Approaching the Capacity of Wireless Networks through Distributed Interference Alignment

Zero-forcing methods for downlink spatial multiplexing in multiuser MIMO channels

Next Generation Wireless Communications Using Radio over Fiber

Interference Cancellation Using Space-Time Processing and Precoding Design

Precoding Techniques for Digital Communication Systems

Interference Alignment

Is pCell from Artemis really the Holy Grail of wireless networking

Short Courses

Press Coverage:

Wireless System Could Offer a Private Fast Lane

5G Service on your 4G Phone

Steve Perlman Artemis pCell pWave Antenna Launch

Is pCell the Holy Grail of Wireless Networking

Steve Perlman's New Startup Says it has the Answer to the Mobile Capacity Crunch

Artemis' pCell offers personal cell for every device, promises dramatic LTE capacity increase

Steve Perlman’s Artemis unveils his ‘breakthrough’ wireless broadband technology: pCell

This Man Says He Can Speed Cell Data 1,000-Fold. Will Carriers Listen?

Startup Claims Cellular Breakthrough

Artemis pCell: Your Own Private Cell Network?

Steve Perlman's Amazing Wireless Machine Is Finally Here

OnLive creator's next project could put an end to cellular reception woes

Perlman’s pCell: The super-fast future of wireless networking, or too good to be true?