Information and communication technology specialist Fujitsu has announced software offerings designed to help customers utilize big data. The offerings consist of two product families: a line of parallel distributed processing and complex event processing products, which are standard technologies in big data applications, and a line of products for putting big data to work across a range of uses.
The offerings include two new products: Interstage Big Data Complex Event Processing Server V1, a complex event processing solution, and Interstage eXtreme Transaction Processing Server V1, an extreme transaction processing solution. Fujitsu is also releasing an enhanced version of its Symfoware Server V11 database solution. The products are currently available in Japan, and Fujitsu said it plans to roll them out worldwide gradually.
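Complex event processing generally means evaluating a continuous stream of events against rules in near real time, rather than querying data after it has been stored. The following is a minimal sketch of that idea only; the function and its parameters are illustrative and are not part of Fujitsu's product API:

```python
# Minimal complex event processing (CEP) sketch: watch a stream of
# sensor readings and flag a "burst" when several readings exceed a
# threshold within a sliding window. Illustrative only.
from collections import deque

def detect_bursts(events, threshold=100, window=5, count=3):
    """Yield the index of each event that completes a burst:
    `count` readings above `threshold` within the last `window` events."""
    recent = deque(maxlen=window)
    for i, value in enumerate(events):
        recent.append(value > threshold)
        if sum(recent) >= count:
            yield i

stream = [90, 120, 80, 130, 140, 70, 60]
print(list(detect_bursts(stream)))  # -> [4, 5]
```

A production CEP engine applies many such rules concurrently to high-volume streams, but the pattern of matching conditions over a moving window is the same.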
The Interstage Big Data Parallel Processing Server is a parallel distributed processing software platform that improves data reliability by combining the Apache Hadoop open source parallel distributed processing platform with Fujitsu's own proprietary distributed file system. At the same time, by sharing files between servers, the software eliminates the need to transfer data between processing steps, which can improve processing performance.
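The parallel distributed processing model behind Hadoop can be sketched as a map/reduce pattern: the input is split into chunks, each chunk is processed independently (in a real cluster, ideally on the node that already stores it, which is how data transfer is avoided), and the partial results are then merged. A minimal single-process illustration, with hypothetical function names:

```python
# Sketch of the map/reduce pattern used by Hadoop-style platforms.
# Single-process illustration only; a real cluster runs map_chunk on
# many nodes in parallel, each against locally stored data.
from collections import Counter

def map_chunk(lines):
    """Map step: count words within one chunk of the input."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def reduce_counts(partials):
    """Reduce step: merge per-chunk counts into a global result."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

data = ["big data big", "data processing", "big processing"]
chunks = [data[0:2], data[2:3]]  # simulate data split across two nodes
result = reduce_counts(map_chunk(c) for c in chunks)
print(result["big"])  # -> 3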
The Interstage eXtreme Transaction Processing Server V1 enables high-speed processing through in-memory distributed cache technology, while also safeguarding data using a redundant configuration of up to three layers. In addition, Symfoware Server V11 stores large volumes of data using parallel I/O technology optimized for solid-state drives (SSDs). The database also employs high-compression technology to compress data to one-fifth of its original size.
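The general idea of an in-memory distributed cache with redundant copies is that each key is written to several nodes, so the data survives the loss of any one of them. The following sketch illustrates that principle under simple assumptions (a fixed ring of nodes, hash-based placement); it is not Fujitsu's implementation, and all names are hypothetical:

```python
# Sketch of an in-memory distributed cache with up to three redundant
# copies per key. Each key hashes to a primary node; the next nodes in
# ring order hold backup replicas. Illustrative only.
import hashlib

class ReplicatedCache:
    def __init__(self, num_nodes=4, replicas=3):
        self.nodes = [dict() for _ in range(num_nodes)]
        self.replicas = min(replicas, num_nodes)

    def _node_ids(self, key):
        # Hash the key to pick a primary node, then take the next
        # nodes in ring order as backup replicas.
        start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.nodes)
        return [(start + i) % len(self.nodes) for i in range(self.replicas)]

    def put(self, key, value):
        for nid in self._node_ids(key):
            self.nodes[nid][key] = value

    def get(self, key):
        # Read from the first replica that still holds the key.
        for nid in self._node_ids(key):
            if key in self.nodes[nid]:
                return self.nodes[nid][key]
        return None

    def fail_node(self, nid):
        self.nodes[nid].clear()  # simulate a node crash

cache = ReplicatedCache()
cache.put("order:1", {"total": 42})
cache.fail_node(cache._node_ids("order:1")[0])  # kill the primary
print(cache.get("order:1"))  # still served from a surviving replica
```

With three replica layers, the cache tolerates the loss of two nodes holding a given key before that key becomes unavailable.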
The products use technologies such as Fujitsu's Primecluster distributed file system; Interstage Shunsaku Data Manager, a high-speed filter technology for scoping events; and Primesoft Server, a high-speed in-memory data management technology. Delivered as on-premises software, these new products can also be combined with products from other vendors, including open-source software, to build ecosystems that support customers in using big data.
An executive brief published by IT research firm IDC in January cites big data as one of the major obstacles organizations face when committing resources and applications to cloud computing. At stake is the issue of validating the integrity and authenticity of data within a cloud environment. Titled "Data Integrity in the Big Data Digital Age," the research paper ties the current big data phenomenon to cloud computing, making a case that organizations must grapple with analyzing and basing decisions on petabytes of unstructured data, yet rarely question the authenticity of that data.
This article was originally published on 04-23-2012