Researchers have developed a mathematical formula that, computer simulations suggest, could help 5G and other wireless networks select and share communications frequencies approximately 5,000 times more efficiently than trial-and-error methods. The formula is a form of machine learning that selects a wireless frequency range, known as a channel, based on prior experience in a specific network environment. The formula could be programmed into software on transmitters in many types of real-world networks.

The formula is a way to help meet growing demand for wireless systems, including 5G, through the sharing of frequency ranges (bands) that are unlicensed. Wi-Fi, for example, uses unlicensed bands, those not assigned by the Federal Communications Commission to specific users.

Wi-Fi competes with cellular systems for specific frequencies, or subchannels. What makes this scenario challenging is that these cellular systems are raising their data-transmission rates by using a method called License Assisted Access (LAA), which combines unlicensed and licensed bands. This work explores the use of machine learning in making decisions about the frequency channel on which to transmit.

The formula enables transmitters to rapidly select the best subchannels for successful and simultaneous operation of Wi-Fi and LAA networks in unlicensed bands. The transmitters each learn to maximize the total network data rate without communicating with each other. The scheme rapidly achieves overall performance close to that of exhaustive trial-and-error channel searches.

The research takes into account multiple network “layers” — the physical equipment and the channel access rules between base stations and receivers.

The formula is a “Q-learning” technique, meaning it maps environmental conditions, such as the types of networks and the numbers of transmitters and channels present, onto possible actions, each scored by a value known as Q that estimates the reward the action will return. By interacting with the environment and trying different actions, the algorithm learns which channel provides the best outcome. Each transmitter learns to select the channel that yields the best data rate under specific environmental conditions. If both networks select channels appropriately, the efficiency of the combined overall network environment improves.
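As a rough illustration of the idea, the sketch below implements a stripped-down, bandit-style version of per-transmitter Q-learning in Python. The environment, reward definition, and parameter values are hypothetical assumptions made for this example, not the model used in the study; it shows only the general mechanism of learning a Q value per channel from repeated trials, with each transmitter learning independently.

```python
import random

# A minimal, hypothetical sketch: each transmitter runs its own bandit-style
# Q-learner over the available channels, with no coordination between
# transmitters. All parameter values below are illustrative assumptions.

NUM_TRANSMITTERS = 4   # hypothetical transmitters sharing an unlicensed band
NUM_CHANNELS = 4       # hypothetical subchannels to choose from
ALPHA = 0.1            # learning rate
EPSILON = 0.1          # exploration probability
EPISODES = 5000

def rewards_for(choices):
    """Illustrative reward: a transmitter earns 1.0 if no other transmitter
    picked the same channel (no collision), and 0.0 otherwise."""
    return [0.0 if choices.count(ch) > 1 else 1.0 for ch in choices]

# One Q-table per transmitter: Q[c] estimates the reward for picking channel c.
q_tables = [[0.0] * NUM_CHANNELS for _ in range(NUM_TRANSMITTERS)]

for _ in range(EPISODES):
    choices = []
    for q in q_tables:
        if random.random() < EPSILON:
            choices.append(random.randrange(NUM_CHANNELS))  # explore a random channel
        else:
            choices.append(q.index(max(q)))                 # exploit the best-known channel
    # Each transmitter nudges its Q estimate toward the reward it observed.
    for q, ch, r in zip(q_tables, choices, rewards_for(choices)):
        q[ch] += ALPHA * (r - q[ch])

# After training, each transmitter greedily uses its highest-Q channel.
assignment = [q.index(max(q)) for q in q_tables]
print("Learned channel assignment:", assignment)
```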

The method boosts data rates in two ways. First, if a transmitter selects a channel that is not occupied, the probability of a successful transmission rises, which raises the data rate. Second, if a transmitter selects a channel on which interference is minimized, the received signal is stronger, which also raises the data rate.
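A back-of-the-envelope way to see how these two effects combine, using assumed numbers that are not from the study, is to scale a Shannon-style link-rate estimate by the probability of avoiding a collision:

```python
import math

# Illustrative model (not from the study): the expected data rate on a channel
# is the link rate, estimated with a Shannon-style log2(1 + SINR) term, scaled
# by the probability that the transmission does not collide on that channel.

def expected_rate_mbps(bandwidth_mhz, signal_mw, interference_mw, noise_mw, p_collision):
    sinr = signal_mw / (interference_mw + noise_mw)
    link_rate = bandwidth_mhz * math.log2(1.0 + sinr)  # Mbit/s for bandwidth in MHz
    return (1.0 - p_collision) * link_rate

# Hypothetical numbers: a crowded, interference-heavy channel vs. a quieter one.
print(expected_rate_mbps(20, 1.0, 0.50, 0.01, p_collision=0.40))  # lower expected rate
print(expected_rate_mbps(20, 1.0, 0.05, 0.01, p_collision=0.05))  # higher expected rate
```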

In the computer simulations, the optimum allocation method assigns channels to transmitters by searching all possible combinations to find the assignment that maximizes the total network data rate. The formula produces results close to that optimum through a much simpler process. The study found that an exhaustive search for the best solution would require about 45,600 trials, whereas the formula reached a similar solution by trying only 10 channels, just 0.02 percent of that effort.
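The scaling behind that comparison can be sketched with hypothetical numbers: an exhaustive search must evaluate every possible assignment of channels to transmitters, a count that grows exponentially, while the learned approach samples only a handful of channels. The figures below are illustrative and do not reproduce the study's setup; only the final ratio uses the article's own numbers.

```python
# Rough illustration: exhaustively assigning one of C channels to each of N
# transmitters means C**N combinations to evaluate. The N and C values here
# are made up and do not reproduce the study's 45,600-trial figure.

def exhaustive_combinations(num_transmitters: int, num_channels: int) -> int:
    return num_channels ** num_transmitters

print(exhaustive_combinations(8, 4))  # 65,536 full-network evaluations
print(f"{10 / 45_600:.3%}")           # the article's ratio: 10 trials vs. 45,600, about 0.02 percent
```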

The work addressed indoor scenarios such as a building with multiple Wi-Fi access points and cellphones operating in unlicensed bands. Researchers now plan to model the method in larger-scale outdoor scenarios and conduct physical experiments to demonstrate the effect.

For more information, contact Laura Ost at 303-497-4880.