As a manufacturer of dry adhesive material used to handle silicon wafers during production, my team and I pay close attention to what’s happening in the semiconductor industry. After attending Semicon West earlier this year, speaking regularly with people across the industry over the years and keeping up through the trade media, I see technological changes and market conditions pointing to three areas in which semiconductor manufacturers and their partners should focus their energies over the next several years: the Internet of Things (IoT), artificial intelligence (AI) and 5G networks.
IoT is a term that means different things to different people. For the sake of clarity, I’ll define it as internet connectivity of electronics, appliances and other devices or equipment to enable quick and efficient data collection and data processing in fields ranging from manufacturing to healthcare to transportation to insurance. The consensus among experts is that IoT either is already the most common application for semiconductors or will soon account for the largest share of the semiconductors in use.
Based on what I saw at Semicon West, semiconductors are becoming smaller, more flexible and better-packaged, thanks largely to advances in fluorochemistry.
Nevertheless, there are warning signs that Moore’s Law (the observation that the number of transistors, and by extension the performance, of a dense integrated circuit doubles about every two years) may be relegated to history, as the peak performance of semiconductors may be starting to level off after a roughly 30-year period of computers becoming smaller and faster.
For you history buffs, Moore’s Law is named for an observation Gordon Moore made in the 1960s. Moore, a Caltech Ph.D. who’s now 90 years young, co-founded Fairchild Semiconductor and later Intel, where he served as CEO.
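To see why that doubling cadence mattered so much, here’s a back-of-the-envelope sketch (my own illustrative numbers, not historical data) of how a fixed two-year doubling period compounds:

```python
# Illustrative sketch of Moore's Law: transistor counts doubling every ~2 years.
# The starting count and time span below are hypothetical round numbers.

def transistors(start_count: int, years: int, doubling_period: float = 2.0) -> int:
    """Project a transistor count forward under a fixed doubling period."""
    return int(start_count * 2 ** (years / doubling_period))

# A hypothetical chip with 1 million transistors, projected 30 years out:
# 2^(30/2) = 2^15 = 32,768x growth.
print(transistors(1_000_000, 30))  # 32768000000
```

Thirty years of doubling every two years multiplies the count by more than 32,000 — which is why even a modest slowdown in the cadence is such a big deal for the industry.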
Another factor that should be of concern is the struggle that many, if not most, semiconductor manufacturers have experienced with respect to generating profits in recent years. Because the market considers semiconductors to be a commodity, margins can be narrow, and profits made over a multi-year period on one product or set of products can be offset by losses on future products that fail to gain traction.
As companies have struggled with profitability from chip sales, they have tended to expand their capabilities to include systems integration, software and related services. As a result, many companies have more software engineers than hardware engineers, but don’t (or can’t) charge their customers for the value that software engineers add to the transaction. This phenomenon amounts to providing more value and getting less compensation than the services are worth, and that’s an unsustainable business model.
My sense is that, given the brilliant technological and business minds throughout the industry, companies will figure it out to their competitive advantage, and business customers and consumers alike will benefit from the resulting changes in a manner to be determined, given the number of variables in play.
AI, in which a computer’s algorithms mimic a human’s ability to learn, reason and make decisions, requires a tremendous amount of computing power. That’s especially true for deep learning, in which multi-layered neural networks process large amounts of data to perform human-like tasks such as reasoning, vision, speech recognition and hearing.
In my experience, the complexity of AI chips appears to be a significant driver of manufacturing innovations. Specifically, AI chips are larger than the chips required by PCs and smartphones, which has led many semiconductor manufacturers to make capital investments in systems and equipment capable of producing AI chips for applications such as large (hyperscale) servers, autonomous vehicles, robotics, drones, IoT devices such as security cameras, and retail systems.
For example, retailers are using AI to predict sales and reduce inventory. In a similar vein, electric utilities are using AI to improve their predictive maintenance capabilities, automate fault prediction and reduce overhead, passing the savings along to customers.
AI’s usefulness for training and decision-making also extends to finance, healthcare (increasing nurses’ productivity to help customize patient treatment), energy, education (virtual learning, automated grading) and cybersecurity. It also appears to be creating opportunities for smaller companies that specialize in developing AI applications without having to invest in high-performance computing hardware.
Clearly, AI advancements have led to innovations in manufacturing, which have allowed the semiconductor industry to thrive globally as plants have upgraded to meet the increased memory requirements that customers (i.e., manufacturers of devices that use semiconductors, as opposed to end users) demand.
Beyond keeping up with technology advancements and market demand, semiconductor manufacturers face another potential challenge: competition from their customers. That’s because the emergence of AI has prompted many large companies (“household names,” for the most part, some of the FAANG companies among them) to decrease their reliance on suppliers and develop their own chips to streamline the manufacturing process, reduce overhead costs, manage their inventory most effectively and maintain a higher level of confidentiality with regard to product development.
Additionally, those large companies have been competing for AI talent with smaller manufacturers, and winning that battle, given their ability to offer better compensation packages. They have also been acquiring startups developing technologies that appear to have market potential.
Just when you may have thought that the development of AI technologies was starting to plateau, none other than Bill Gates invested in a startup that is developing an optical microchip that is faster than conventional electronic chips and uses less power.
Taking a broad view of AI, the semiconductor industry appears to be most enthusiastic about adopting the technology because it presents opportunities for growth and provides a means to increase efficiencies in production and management alike.
Such disruption, however, may mean that semiconductor manufacturers will need to reconfigure their workforces in ways that were unimaginable several years ago as there is likely to be increased emphasis on product design and engineering and increased automation of the manufacturing and quality functions.
It could also mean an increased number of joint ventures or strategic partnerships to leverage the increasingly specialized expertise that manufacturers need to remain competitive.
Fifth-generation wireless (5G) is the latest version of cellular communications technology. It has been engineered to increase the speed and responsiveness of wireless networks and is being promoted as an enabler for numerous applications, including mobile phones, automotive, virtual reality and IoT.
In the wireless communications arena, network operators in the United States, China, Japan and South Korea have been spearheading the initial development of 5G technology. Over the next decade they will spend billions of dollars on R&D despite a consensus among industry observers and engineering experts alike that ROI is unclear.
As for the 5GE technology launched almost a year ago by one of the major U.S. wireless carriers, it’s more of a marketing tactic than a reality and has even caused confusion as U.S. carriers have launched real 5G networks on a limited basis, mostly in the West, South and Midwest. By the time you read this blog post, 5G could already be in your area or carriers could have such plans underway, given how fast implementation of the technology is moving.
If you consider an adoption curve that ranges from Trials to Early Adopters to Early Majority to Mainstream Adoption to Late Majority to Laggards, we’re in the Early Adopters stage, approaching the cusp of Early Majority.
5G’s promise of improved communication speed will have considerable effects on semiconductor architectures, processor and memory choices, I/O speeds, power capacities and battery sizes.
Generally, for a new technology to be successful, it should provide roughly a 10x improvement. In this case, such gains could come in overall performance, power reduction, reduced cost, smaller chip size or a combination of those factors.
Some industry observers predict a 1,000x performance improvement over 4G by improving bandwidth, latency and coverage. My sense is that performance will improve significantly, but by a smaller multiplier. As the old saying goes, “Shoot for the moon, land on a planet.”
Edge computing, the process of performing computations and storing data closer to the location where it is needed instead of in a remote data center, can improve response times and save bandwidth. It can also reduce the amount of power required to transmit data over greater distances, as well as create opportunities for more complex processing and communication than 4G currently provides.
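A toy model makes the response-time argument concrete. The distances and the fiber propagation factor below are my own illustrative assumptions, and real-world latency includes switching and processing delays on top of propagation, but the physics alone shows why moving compute closer helps:

```python
# Toy round-trip propagation model for edge vs. remote data-center computing.
# Distances and the fiber factor are illustrative assumptions, not measurements.

SPEED_OF_LIGHT_KM_S = 300_000  # in vacuum; light in fiber travels ~2/3 as fast
FIBER_FACTOR = 2 / 3

def round_trip_ms(distance_km: float) -> float:
    """Propagation-only round-trip time over fiber, in milliseconds."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

# Hypothetical edge node 50 km away vs. a data center 2,000 km away:
print(round(round_trip_ms(50), 2))    # 0.5 ms
print(round(round_trip_ms(2000), 2))  # 20.0 ms
```

Even before queuing and processing delays, a distant data center burns a latency budget that latency-sensitive 5G applications, such as V2X, simply don’t have.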
5G can also help manufacturers of virtual reality (VR) and augmented reality (AR) games eliminate the “motion sickness” that results from slow downloads and that has been limiting adoption of the technologies in commercial environments.
When it comes to driverless vehicles, which are really transmitters on wheels, 5G can integrate multiple technologies to improve response times (latency) and power consumption rates, opening the door to vehicle-to-everything (V2X) communication. That capability can improve vehicle safety significantly by detecting hazards earlier than radar (radio detection and ranging) or lidar (light detection and ranging) can.
As cars evolve into communication hubs, they have the potential to collect and apply data stored in the cloud in ways that can help to reduce traffic congestion and improve passenger safety through route planning based on traffic conditions and hazards, as well as communicating with other vehicles to facilitate lane changes.
Although many 5G applications are still in the prototype stage, they can become marketable once 5G networks become widely available over the next several years. While there are many variables related to the effects of 5G technology on its many potential applications, mobile phones are going to be the first beneficiaries, because their widespread use enables consumers to pay for infrastructure development to an extent that vehicle users and the government organizations responsible for maintaining infrastructure, for example, are unable or unwilling to do.
Eventually, 5G will be as much a part of our lives as microwave ovens, backup cameras in our vehicles and voice assistants in our homes and offices have become. As for the timetable, it depends upon whom you believe. My sense is that usage will roll out gradually over the next several years as companies in a variety of sectors find applications for the technology. Those companies must also make the business case to top and senior management, investors and others that the world is moving in the direction of 5G, and that their respective organizations need to get on board.
Looking ahead, it will be exciting to continue contributing to the development and refinement of technologies that have the potential to transform how business gets done and how we live our lives. Anticipating those opportunities and being prepared to meet them provide motivation every day to innovate for the mutual benefit of customers and society.
What are your thoughts on the futures of IoT, AI and 5G as transformative technologies?