A Data Lake on GCP with Data Fusion
Posted: Apr 14, 2021
As an ever-increasing number of organisations migrate their data platforms to the cloud, there is growing demand for cloud technologies that make use of the skill sets already present in the organisation while also ensuring a successful migration.
ETL developers often form a substantial part of the data teams in many organisations. These developers are skilled in GUI-based ETL tools as well as complex SQL, and many have, or are beginning to build, programming skills in languages such as Python.
There were some broad requirements for the GCP data lake:
- Leverage the ETL skill set already present in the organisation
- Ingest from mixed sources, such as on-premise SQL Server databases
- Support complex dependency management in job orchestration, not just for the ingestion jobs but also for typical pre- and post-ingestion tasks
- Allow data discoverability while still ensuring appropriate access controls
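The dependency-management requirement can be pictured as a small task graph. The sketch below uses Python's standard-library `graphlib` to order hypothetical pre-ingestion, ingestion and post-ingestion tasks; the task names are illustrative, and in a real deployment an orchestrator such as Cloud Composer would manage this graph.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical tasks: each key depends on the tasks in its value set.
dependencies = {
    "ingest_sqlserver":  {"check_source_ready"},        # pre-ingestion gate
    "ingest_flat_files": {"check_source_ready"},
    "load_warehouse":    {"ingest_sqlserver", "ingest_flat_files"},
    "publish_catalog":   {"load_warehouse"},            # post-ingestion task
}

# A valid execution order that respects every dependency.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

The same shape, with operators instead of dictionary entries, is what an Airflow DAG in Cloud Composer expresses.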
The architecture designed to meet these requirements on GCP is shown below. The key GCP services involved in this architecture include services for data integration, storage, orchestration and data discovery.
GCP provides a comprehensive set of data and analytics services. There are multiple service options for each capability, and choosing between them requires architects and designers to consider the aspects that apply to their particular scenarios.
There are many ways to design the architecture with different service combinations, and what is described here is just one of them. Depending on your own requirements, priorities and considerations, there are other ways to architect a data lake on GCP.
Data Integration Service
The decision tree below details the considerations involved in choosing a data integration service on GCP.
For this use case, data had to be ingested from a variety of sources, including on-premise flat files and relational databases such as SQL Server and Oracle, as well as third-party sources such as APIs. The variety of source systems was expected to grow in the future. The organisation this was being designed for also had a strong base of ETL skills within its data and analytics team.
Google Cloud Data Fusion
Cloud Data Fusion is a fully managed Google Cloud service that provides a graphical user interface for building data pipelines: from ingesting data from a wide range of sources, through applying transformations, to loading the data into warehousing solutions. It simplifies individual data engineering tasks and allows reusable data pipelines to be created across the business. Built on the open-source CDAP framework, Cloud Data Fusion takes usability to a new level by being fully integrated with and supported by Google as part of the Google Cloud Platform (GCP).
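Because Data Fusion exposes the underlying CDAP REST API, pipelines built in the GUI can also be started programmatically, which is useful for orchestration. The sketch below only constructs the start URL for a batch pipeline; the instance endpoint and pipeline name are placeholders, and a real call would POST this URL with an OAuth 2.0 bearer token (e.g. obtained via `google-auth`).

```python
# Placeholders: substitute your Data Fusion instance's API endpoint
# (visible in the instance details) and your pipeline's name.
CDAP_ENDPOINT = "https://my-instance.example.datafusion.googleusercontent.com/api"
NAMESPACE = "default"
PIPELINE = "sqlserver_to_warehouse"  # hypothetical pipeline name

# Batch pipelines deployed from the Studio run under a workflow
# named DataPipelineWorkflow in the CDAP API.
start_url = (
    f"{CDAP_ENDPOINT}/v3/namespaces/{NAMESPACE}"
    f"/apps/{PIPELINE}/workflows/DataPipelineWorkflow/start"
)
print(start_url)
```

In the architecture described here, this is the kind of call an orchestrator makes on the team's behalf rather than something developers script by hand.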
The idea of a digital enterprise has evolved considerably over the past few years. First used to describe any business that takes advantage of digital technology, it is now also broadly associated with technologies for automated data collection and data-driven decision making.

Overall, Cloud Data Fusion allows users to quickly build and manage data pipelines. The best part is that, rather than writing custom code to connect a data source to a warehouse, data specialists can use a convenient graphical interface to build the pipelines they need. This lets them focus on the actual data analytics and on deriving insights for better operational efficiency.
Foghorn Consulting solves complex business needs through cloud consulting, partnering with leading cloud platforms to create innovative and secure products for you.