BI Data Engineer - REMOTE

Rolling Meadows, IL
$145,000 - $160,000 / Year 
Mid level

Job Description

Motion has partnered with a premier client to fill a full-time, fully REMOTE employee position for a BI Data Engineer. This is a great opportunity to expand your career and work with a well-known company in the greater Chicago area. Do you get excited working on Azure cloud platforms, specifically ingesting data using Azure Data Factory (ADF)? Are you experienced with Snowflake, Databricks, SQL, and Python in enterprise data warehouse environments? This position may be for you.

Responsibilities: (Must have hands-on knowledge of ALL of the following - 5+ years)

  • Work on a cloud platform (joining a team of data engineers) - AZURE
  • Ingest and transform data using Azure Data Factory (ADF)
  • In-depth Snowflake experience
  • Must have experience with Databricks
  • A very high level of SQL proficiency is critical
  • Must have experience with the Python programming language
  • Knowledge of DevOps processes, including Git workflows (with CI/CD) and Infrastructure as Code fundamentals; managing backlogs
  • Must have worked in an Agile environment (three-week sprints)
  • Must have enterprise-level Data Warehouse environment experience
  • Should also have previous enterprise Data Modeling experience (will be loading data into the data warehouse)
  • Must have excellent communication and interpersonal skills
  • Insurance industry experience
  • Augmenting outbound data movement and integration

Formal Job Description

Position Summary: The Data Engineer will demonstrate broad and deep knowledge of ETL development in dimensional and relational databases in Azure and Snowflake cloud environments. The Data Engineer will support data analysts and data scientists on data initiatives and ensure that an optimal data delivery architecture is consistent throughout ongoing projects. You will support the data needs of multiple teams, systems, and products.
Essential Duties and Responsibilities
  • Build the infrastructure required for optimal ETL/ELT pipelines to ingest data from a wide variety of data sources using Microsoft Azure technologies such as Azure Data Factory and Databricks.
  • Construct and maintain enterprise-level integrations using the Snowflake platform, Azure Synapse, Azure SQL, and SQL Server.
  • Design ETL pipelines and reusable components to implement specified business requirements. Troubleshoot and optimize ETL code: interpret ETL logs, perform data validation, understand the benefits and drawbacks of parallelism, use expressions properly, scope variables appropriately, apply commonly used transforms, event handlers, and logging providers, and optimize surrogate key generation and inconsistent data type handling.
  • Create data tools for data analytics and data science team members to deliver actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Conduct code reviews, performance analysis and participate in technical design
  • Orchestrate large, complex data sets that meet functional/non-functional business requirements.
  • Seek out, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability.
  • Partner with data and analytics talent to strive for greater functionality in our data systems.
  • A relevant technical BS degree in Information Technology and 5 years of relevant professional experience implementing well-architected data pipelines that are dynamically scalable, highly available, fault-tolerant, and reliable for analytics and platform solutions
  • 3+ years of data engineering experience leveraging technologies such as Snowflake, Azure Data Factory, ADLS Gen 2, Logic Apps, Azure Functions, Databricks, Apache Spark, Scala, Synapse, SQL Server
  • Understanding of the pros, cons, and best practices of implementing a data lake using Microsoft Azure Data Lake Storage
  • Experience structuring a data lake for reliability, security, and performance
  • 5 years writing SQL/T-SQL queries against any RDBMS, with query optimization and performance tuning
  • Experience implementing ETL for Data Warehouse and Business intelligence solutions
  • Working experience with Python and PowerShell scripting
  • Ability to read and write effective, modular, dynamic, parameterized, and robust code, and to follow established code standards and the ETL framework
  • Strong analytical, problem solving, and troubleshooting abilities, experience performing root cause analysis
  • Good understanding of unit testing, software change management, and software release management
  • Experience working within an agile team; in-depth knowledge of agile processes and principles
  • Excellent communication skills
Ref #
Last updated 23 days ago
