In this post, I’ll give a brief overview of the high performance computing (HPC) industry and R Systems’ positioning in this segment.
Intro to HPC
High Performance Computing (HPC) most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation in order to solve large problems in science, engineering, or business. (Source: http://insidehpc.com/hpc-basic-training/what-is-hpc)
Nowadays even small and medium enterprises (SMEs) are turning to supercomputers for the research and development behind new products. Supercomputers are used because the data analysis and simulation studies involved cannot be performed on a single desktop computer. The product being developed need not be highly sophisticated, but the analysis requires high computing power as well as specialized software. Products developed using supercomputers range from simple items such as bicycle wheels and potato chips to complex ones such as Formula 1 cars or new drugs in the pharma sector.
Rather than spending on their own supercomputer infrastructure, SMEs tend to approach an HPC service provider to carry out their R&D. The provider assists the SME in arriving at the ideal configuration for the required task.
The benefits for SMEs of using HPC technology within their design and development processes can be huge: enormous cost savings, e.g. by catching product failures early during design, development, and production; more simulations, leading to higher-quality products; and more computing power, enabling shorter time to market. Potentially, all this can lead to increased competitiveness and more innovation. (Source: http://www.theubercloud.com/cost)
In a nutshell, the fixed overheads are converted to variable costs and the SME does not have to worry about the low utilization of the HPC infrastructure.
Industry size
The total size of the HPC industry (2012 data) is ~$30 bn. The industry is divided into five basic segments: servers, storage, services, software, and networks.
Servers is the largest segment (~$10.3 bn). The major players here are IBM, HP, Dell and Cray; Lenovo has also entered the business recently. Software is the second-largest segment (~$6.6 bn). Some companies develop software specifically to run on HPC infrastructure; a list of some of these companies is provided on R Systems’ HPC website (http://www.r-hpc.com/about-us/technology-partners/).
Services (~$3.3 bn) and storage (~$4.8 bn) are the other segments. R Systems is positioned mostly in the HPC services segment, offering services such as dedicated hosting, shared systems, virtual private clusters and off-site/remote facilities.
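As a rough sanity check on these figures, the named segments can be tallied against the ~$30 bn total. Note that the post does not give a figure for networks; treating it as the residual is my own inference, on the assumption that the five segments roughly sum to the total:

```python
# Segment sizes in $bn (2012 figures from the post)
segments = {
    "Servers": 10.3,
    "Software": 6.6,
    "Storage": 4.8,
    "Services": 3.3,
}
total = 30.0  # approximate industry size from the post

named = sum(segments.values())       # sum of the four named segments
networks_residual = total - named    # implied Networks share (my inference)

for name, size in segments.items():
    print(f"{name}: ${size} bn ({size / total:.0%} of market)")
print(f"Networks (implied residual): ~${networks_residual:.1f} bn")
```

On these numbers the four named segments account for ~$25 bn, leaving roughly $5 bn for networks if the segment figures and the total are both accurate.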
Recently, with advances in telecommunications, HPC services are also being offered via cloud computing, making supercomputing similar to a utility in which fees are charged according to usage.
According to Intersect360 Research, the industry is forecast to reach ~$40 bn by 2017.
About R Systems HPC business
R Systems provides HPC services to the academic community as well as to commercial organizations. The markets it can serve include Bio-Tech Sciences, Electronic Design Automation, Entertainment, Financial Services, Health Care, Insurance, Motorsports, Oil & Gas, Wind Energy and Weather Forecasting.
The company owns two data centres in the U.S. from which it provides HPC services; one is located within the campus of the University of Illinois, which gives the company a ready market as far as academic research is concerned.
The company has developed partnerships with various players in the industry. It has partnered with Dell to offer HPC services over the cloud, and it counts among its clients bicycle manufacturers such as Trek and Cervelo, which use HPC to design bicycles. More details can be found at http://www.r-hpc.com/about-us/technology-partners.
The potential to create a moat in this business stems from:
1). Developing expertise in each client’s R&D requirements and being able to configure the exact hardware and software setup, which saves time and cost. Once a client is comfortable with a particular HPC provider, it faces switching costs when migrating to another provider.
2). Research and development carried out by commercial enterprises sometimes involves proprietary data whose secrecy and confidentiality must be maintained. An HPC provider must earn a client’s trust before the client will use its infrastructure for R&D activity, and building such a relationship takes time.
For the last few years the HPC business of R Systems was loss-making; it swung to a profit last year. In 2012 the company made a loss of Rs. 23 cr on revenues of Rs. 92 cr from its two U.S. subsidiaries that provide HPC services. In 2013, the HPC business reported a profit of Rs. 6.3 cr on revenues of Rs. 138 cr. A Rs. 46 cr increase in revenues thus led to a nearly Rs. 29 cr improvement in profits, indicating very high incremental margins. As I mentioned in my previous post, we need to dig deeper to understand the exact cost structure of the HPC business. Previously, the loss-making HPC business was depressing overall profitability. If the costs are predominantly fixed in nature, further growth of the HPC business will lead to high incremental margins. If anyone has expertise in this area, please feel free to share your views.
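The operating-leverage arithmetic above can be laid out explicitly. The figures are the reported ones from the post; the incremental-margin formula is the standard one, not something the company discloses:

```python
# Reported HPC figures in Rs. cr (from the post)
revenue_2012, profit_2012 = 92.0, -23.0   # loss of Rs. 23 cr
revenue_2013, profit_2013 = 138.0, 6.3    # profit of Rs. 6.3 cr

delta_revenue = revenue_2013 - revenue_2012   # 46.0
delta_profit = profit_2013 - profit_2012      # 29.3 (loss reversal + profit)

# Incremental margin: the share of each extra rupee of revenue
# that fell through to profit
incremental_margin = delta_profit / delta_revenue

print(f"Revenue increase: Rs. {delta_revenue:.0f} cr")
print(f"Profit improvement: Rs. {delta_profit:.1f} cr")
print(f"Incremental margin: {incremental_margin:.0%}")
```

The incremental margin works out to roughly 64%, which is consistent with a largely fixed cost base: once the infrastructure is in place, most additional revenue drops straight to the bottom line.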
Tomorrow, I will post more on another subsidiary, Computaris, which R Systems recently acquired. Other senior members can put forward their own views and suggestions on how to analyze the company.