Creating a Metadata Driven Processing Framework Using Azure Integration Pipelines|Data Factory|SQL

Title: Creating a Metadata Driven Processing Framework Using Azure Integration Pipelines

Summary: Dynamic Pipelines + Metadata + Functions = an Azure-based processing framework with capabilities that take platform orchestration to the next level. In this session you'll find out why and how to deliver such a solution using this open-source code project.

Abstract: Azure Data Factory (ADF) is the undisputed PaaS resource within the Microsoft Cloud for orchestrating data workloads. With 100+ Linked Service connections and a flexible array of both control flow and data flow Activities, there isn't much Data Factory can't do as a wrapper over our data platform solutions. That said, the service may still require the support of other Azure resources for the purposes of logging, monitoring, compute and storage. In this session we'll focus on exactly that point and explore the problem faced when structuring many integration pipelines, whether deployed via ADF or Azure Synapse Analytics. Once done, we'll look at one possible solution to this problem by coupling our orchestration resource with a SQL Database and Azure Functions to create a dynamic, flexible, metadata-driven processing framework that complements our existing solution pipelines. Furthermore, we will explore how to bootstrap multiple orchestrators (across tenants if needed), design for cost with nearly free Consumption Plans, and deliver an operational abstraction over all our processing pipelines. Finally, we'll explore delivering this framework within an enterprise and consider an architect's perspective on a wider platform of ingestion/transformation workloads with multiple batches and execution stages.

Speaker: Paul Andrew

BIO: Paul Andrew is a Group Manager & Analytics Architect specializing in big data solutions on the Microsoft Azure cloud platform. His data engineering competencies include Azure Synapse Analytics, Data Factory, Data Lake, Databricks, Stream Analytics, Event Hub, IoT Hub, Functions, Automation, Logic Apps and, of course, the complete SQL Server business intelligence stack. He has many years' experience working within the healthcare, retail and gaming verticals, delivering analytics using industry-leading methods and technical design patterns. He is a STEM ambassador and a very active member of the data platform community, delivering training and technical sessions at conferences both nationally and internationally. Father, husband, swimmer, cyclist, runner, blood donor, geek, Lego and Star Wars fan!

Speaker Blog: https://mrpaulandrew.com/

Follow Us On Social Media AT #clouddatadriven

Join Our Social Media User Group
LinkedIn 👉 https://bit.ly/3676SV3
Facebook 👉 https://bit.ly/2XUggXV
Eventbrite 👉 https://bit.ly/3sHxO7K
YouTube 👉 https://bit.ly/38SRFsA
Meetup 👉 https://bit.ly/3a0v5gT
Twitter 👉 https://bit.ly/3akl1iW
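
To make the "orchestration resource + SQL Database" idea in the abstract concrete, below is a minimal T-SQL sketch of what a metadata-driven approach could look like: a table listing worker pipelines grouped into execution stages, and the kind of query a parent pipeline's Lookup activity might run before a ForEach activity hands each row to an Azure Function that triggers the pipeline run. The table and column names (PipelineMetadata, StageNumber, OrchestratorName, etc.) are hypothetical illustrations and are not taken from the open-source project's actual schema.

    -- Hypothetical metadata table: each worker pipeline to be orchestrated,
    -- grouped into execution stages (names are illustrative, not the
    -- framework's real schema).
    CREATE TABLE dbo.PipelineMetadata
    (
        PipelineId       INT IDENTITY(1,1) PRIMARY KEY,
        StageNumber      INT           NOT NULL,  -- stages/batches run in order
        OrchestratorName NVARCHAR(200) NOT NULL,  -- target Data Factory or Synapse workspace
        PipelineName     NVARCHAR(200) NOT NULL,  -- worker pipeline to call
        Enabled          BIT           NOT NULL DEFAULT 1
    );

    -- Example rows: two ingestion pipelines in stage 1, one transform in stage 2.
    INSERT INTO dbo.PipelineMetadata (StageNumber, OrchestratorName, PipelineName)
    VALUES (1, N'adf-ingestion', N'Ingest_SalesData'),
           (1, N'adf-ingestion', N'Ingest_CustomerData'),
           (2, N'adf-transform', N'Transform_SalesModel');

    -- Query a parent pipeline's Lookup activity could issue to fetch the
    -- enabled worker pipelines for the current stage; the stage number would
    -- typically arrive as a pipeline parameter.
    SELECT PipelineId, OrchestratorName, PipelineName
    FROM dbo.PipelineMetadata
    WHERE StageNumber = 1
      AND Enabled = 1;

Driving execution from rows like these is what gives the framework its flexibility: adding, disabling or re-staging a worker pipeline becomes a metadata change rather than a pipeline redeployment.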
