Senior Data Engineer

To Apply for this Job Click Here

The Data Services team at our Columbus, Ohio client is looking for a Senior Data Engineer on a 6-month contract-to-hire basis. They are looking for someone with Python, SQL, Spark, and Databricks experience. Responsibilities include building pipelines, data sets, and schemas, and optimizing data.

 

**Local candidates in the Columbus area are preferred, but remote is acceptable within the ET/CT time zones.**

 

If you are a “Data Person” who knows how to build data pipelines and aggregate data from multiple sources into a common data store, and you have experience with industry-leading cloud data platforms and scripting, we’d love to talk with you.

 

Responsibilities:

  • Help support the IT organization’s data strategy in support of business customers, as well as internal initiatives.
  • Work closely with IT Data Architect, Integration Engineers, Business Operations customers, and Services Partners to help advance client data and analytics ambitions.
  • This role will serve as a hands-on contributor and be a thought leader in the areas of data engineering, cloud data strategy, Business Intelligence, Data Modeling and ETL/ELT.
  • Work with the IT Data Engineering Team and assist in developing the next generation data and analytics infrastructure.
  • Must have strong SQL modeling skills (dbt is a plus).
  • Write high-quality SQL code to retrieve and analyze data from database tables (primarily Databricks).
  • Develop high-quality SQL models for ad-hoc requests, as well as ongoing reporting and dashboarding.
  • Work directly with business stakeholders to translate between data and business needs.
  • Continually improve SQL models by automating or simplifying self-service support for datasets.

 

Qualifications:

  • Bachelor's degree in Computer Science, Information Systems, Engineering, Data Science, Mathematics, or another similarly technical field; specialized training/certification; or equivalent work experience.
  • Minimum of 1–3 years of professional experience in cloud data engineering and data science and associated technologies, utilizing cloud data platforms such as Databricks, AWS Redshift, and Snowflake.
  • Minimum of 1–3 years of experience with advanced SQL data modeling and query optimization.
  • Proficiency in using visualization tools such as Tableau, Domo, or Power BI.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong verbal, written & presentation skills with the ability to effectively communicate complex technical information to personnel at all levels of the organization.

 

Nice to Have:

  • Specific experience with Data Warehouse/Data Lake configuration and development using Databricks platform.
  • Experience with Tableau / Sigma Computing
  • Experience operating in an Agile development environment.
  • Familiarity with usage of Agile tools (JIRA / Confluence)
  • Understanding of CI/CD deployment models and release strategy as well as SCM tools (Git preferred) and code management best practices.
  • Experience in AWS environment.
  • Experience with cloud ELT platforms such as AWS Glue, Talend Stitch, or Fivetran.
