This article describes the design process, principles, and technology choices for using Azure Synapse to build a secure data lakehouse solution. We focus on the security considerations and key technical decisions.

Apache®, Apache Spark®, and the flame logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries. No endorsement by The Apache Software Foundation is implied by the use of these marks.

The following diagram shows the architecture of the data lakehouse solution. It's designed to control the interactions among the services in order to mitigate security threats. Solutions will vary depending on functional and security requirements.

Download a Visio file of this architecture.

The dataflow for the solution is shown in the following diagram:

1. Data is uploaded from the data source to the data landing zone, either to Azure Blob Storage or to a file share that's provided by Azure Files. The data is uploaded by a batch uploader program or system. Streaming data is captured and stored in Blob Storage by using the Capture feature of Azure Event Hubs. For example, several different factories can upload their operations data. For information about securing access to Blob Storage, file shares, and other storage resources, see Security recommendations for Blob Storage and Planning for an Azure Files deployment.

2. The arrival of the data file triggers Azure Data Factory to process the data and store it in the data lake in the core data zone.
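The file-arrival step is typically wired up as a Data Factory storage event trigger that fires on blob creation. The fragment below is a hedged sketch of such a trigger definition; the trigger name, container path, pipeline reference, and the elided storage account resource ID are placeholders, and the exact schema should be confirmed against the Data Factory documentation.

```json
{
  "name": "OnLandingFileArrival",
  "properties": {
    "type": "BlobEventsTrigger",
    "typeProperties": {
      "blobPathBeginsWith": "/landing/blobs/factory-01/",
      "ignoreEmptyBlobs": true,
      "events": [ "Microsoft.Storage.BlobCreated" ],
      "scope": "<storage-account-resource-id>"
    },
    "pipelines": [
      {
        "pipelineReference": {
          "referenceName": "ProcessLandingData",
          "type": "PipelineReference"
        }
      }
    ]
  }
}
```

With this in place, each new blob under the filtered path starts the referenced pipeline, which processes the data and stores it in the core data zone.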
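One way a batch uploader can keep each source's files separate in the landing zone is a date-partitioned blob path per source. The sketch below shows such a naming convention; the `landing` prefix, the `source_id` values, and the helper name are illustrative assumptions, not part of the article's design.

```python
from datetime import datetime, timezone

def landing_blob_path(source_id: str, filename: str, ts: datetime) -> str:
    """Build a date-partitioned blob path in the landing zone for one source.

    Partitioning by source and UTC date keeps each factory's uploads
    separate and lets a downstream trigger filter on a path prefix.
    """
    d = ts.astimezone(timezone.utc)
    return f"landing/{source_id}/{d:%Y/%m/%d}/{filename}"

# Example: one factory's daily operations upload
path = landing_blob_path(
    "factory-01", "operations.csv",
    datetime(2024, 5, 20, 8, 30, tzinfo=timezone.utc),
)
print(path)  # landing/factory-01/2024/05/20/operations.csv
```

A prefix like `landing/factory-01/` then serves as the filter for the Data Factory trigger that starts processing when a file arrives.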