Confluent launches new technology initiatives to break down data silos
Open source data streaming company Confluent has launched an expanded set of connectors and new on-demand scaling controls designed to break down data silos.
As part of the Confluent Q1 '22 Launch initiative, the company is introducing new controls for efficiently scaling massive-throughput Apache Kafka clusters.
The company says the launch will help businesses scale and digitise their operations more efficiently.
The new controls are also said to expand and shrink cluster capacity at GBps+ throughput levels, improving elasticity in response to real-time business demands. In addition, Schema Linking will ensure compatible data streams across cloud and hybrid environments for customers worldwide.
With the vast majority of data currently housed in silos, the company stresses that new integrations can take months to build, delaying the adoption of new technology and making it significantly harder to meet growing demand while maintaining efficient coordination.
"Because of how we now consume data, companies and customers have a growing expectation of immediate, relevant experiences and communication," says Amy Machado, research manager, Streaming Data Pipelines at IDC, in IDC's Worldwide Continuous Analytics Software Forecast for 2021–2025.
"Real-time data streams and processing will become the norm, not the exception."
Confluent chief product officer Ganesh Srinivasan says the company has built a solution designed to meet the demands of modern business infrastructure.
"The real-time operations and experiences that set organisations apart in today's economy require pervasive data in motion," he says.
"In an effort to help any organisation set their data in motion, we've built the easiest way to connect data streams across critical business applications and systems, ensure they can scale quickly to meet immediate business needs, and maintain trust in their data quality on a global scale."
The launch adds to Confluent's portfolio of over 50 expert-built, fully managed connectors, which help teams quickly modernise applications with real-time data pipelines.
The newest connectors in the Confluent portfolio include Azure Synapse Analytics, Amazon DynamoDB, Databricks Delta Lake, Google BigTable, and Redis. The company says these broaden coverage of popular data sources and destinations.
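To illustrate what pipelines like these look like in practice, the sketch below builds a sink-connector definition in the JSON shape accepted by the open-source Kafka Connect REST API, which underpins self-managed deployments (Confluent's fully managed connectors hide most of this behind its cloud console). Every name, topic, class, and endpoint here is a hypothetical placeholder, not a configuration from the announcement.

```python
import json

# Hypothetical sink-connector definition in the JSON shape the open-source
# Kafka Connect REST API accepts (POST /connectors). All names, topics,
# classes, and hosts below are illustrative placeholders.
connector = {
    "name": "orders-redis-sink",  # hypothetical connector name
    "config": {
        "connector.class": "RedisSinkConnector",  # placeholder class name
        "tasks.max": "1",
        "topics": "orders",            # Kafka topic to sink from
        "redis.hosts": "redis:6379",   # placeholder Redis endpoint
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    },
}

payload = json.dumps(connector, indent=2)
print(payload)

# A self-managed deployment would register it with something like:
#   curl -X POST -H "Content-Type: application/json" \
#        --data @connector.json http://localhost:8083/connectors
```

The declarative style is the point: rather than writing bespoke integration code, a team describes the source or destination and lets the connector runtime handle delivery, retries, and serialisation.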
Confluent says global data quality controls are critical for keeping a Kafka deployment compatible over the long term, and that new capabilities such as Schema Linking will help support further expansion and partnerships.
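Schema Linking keeps schemas in sync between Schema Registry clusters, so producers and consumers in different clouds or regions agree on data formats. As a rough illustration, the sketch below assembles the kind of "exporter" request that drives such a link; the exact field names are an assumption modelled on Schema Registry's REST conventions, and every name, URL, and credential is a placeholder.

```python
import json

# Sketch of a schema "exporter" request for linking two registries.
# Field names are assumptions based on Schema Registry's REST style;
# all names, endpoints, and credentials are placeholders.
exporter = {
    "name": "eu-to-us-link",       # hypothetical exporter name
    "subjects": ["orders-value"],  # subjects to keep in sync
    "config": {
        # Destination registry the schemas are exported to (placeholder):
        "schema.registry.url": "https://dest-sr.example.com",
        "basic.auth.credentials.source": "USER_INFO",
        "basic.auth.user.info": "<api-key>:<api-secret>",  # placeholder
    },
}

print(json.dumps(exporter, indent=2))
```

The value for hybrid environments is that a schema registered once in the source registry propagates automatically, so downstream consumers never deserialise against a stale or incompatible definition.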
Confluent is headquartered in the USA and has multiple offices across the Asia Pacific region, with the new capabilities set to roll out to customers in all of them.