Senior Data Engineer - DataBricks

Full Time
Illinois
Job description
About Grainger:

Grainger is a leading broad line distributor with operations primarily in North America, Japan and the United Kingdom. We achieve our purpose, We Keep the World Working®, by serving more than 4.5 million customers with a wide range of products that keep their operations running and their people safe. Grainger also delivers services and solutions, such as technical support and inventory management, to save customers time and money.

We're looking for passionate people who can move our company forward. As one of the 100 Best Companies to Work For, we have a welcoming workplace where you can build a career for yourself while fulfilling our purpose to keep the world working. We embrace new ways of thinking and recognize everyone's contributions. Find your way with Grainger today.


Position Details:

The Data Engineering team at Grainger is focused on transforming data from Grainger's key domains into reliable, real-time analytics products that address key needs. You will focus on building and operating data pipelines that power analytics ranging from key financial reports to the production models that define Grainger.com's user experience. You will play an important part in defining the team's strategy, evaluating and integrating data patterns and technologies, and building data products alongside domain experts. You are a thoughtful observer who enjoys investigating business problems and building data solutions that address them. You are a technical teacher who can guide teams to adopt the capabilities and products you build.


You Will:
  • Design and maintain efficient, scalable data processing systems and pipelines on Databricks, Airflow, APIs, and AWS services.
  • Create technical solutions that solve business problems and are well engineered, operable, maintainable, and delivered on schedule.
  • Design and implement tools to detect data anomalies. Ensure that data is accurate, complete, and consistent across all platforms.
  • Develop data models and mappings and build new data assets required by users. Perform exploratory data analysis on existing products and datasets.
  • Provide technical guidance to help data users adopt new data pipelines and tools.
  • Develop scalable, reusable frameworks for ingesting and transforming large datasets.
  • Understand trends and emerging technologies. Evaluate the performance and applicability of potential tools against our requirements.
  • Work within an Agile delivery / DevOps methodology to deliver product increments in iterative sprints.
  • Work with our AI, Platform, and Business Analytics teams to build useful pipelines and data assets.
You Have:
  • Experience with batch and streaming ETL using Spark, Python, or Scala on Databricks for data engineering or machine learning workloads.
  • Familiarity with AWS services including, but not limited to, Glue, Athena, Lambda, S3, and DynamoDB.
  • Experience preparing structured and unstructured data for data science models.
  • Demonstrated experience implementing the data management life cycle, using data quality functions such as standardization, transformation, rationalization, linking, and matching.
  • Familiarity with containerization and orchestration technologies (Docker, Kubernetes) and experience with shell scripting in Bash, Unix, or Windows shells is preferred.
Rewards and Benefits:

With benefits starting day one, Grainger is committed to your safety, health, and wellbeing. Our programs provide choice and flexibility to meet our team members' individual needs. Check out some of the rewards available to you at Grainger:

  • Medical, dental, vision, and life insurance plans
  • Paid time off (PTO) and 6 company holidays per year
  • Automatic 6% 401(k) company contribution each pay period
  • Employee discounts, parental leave, 3:1 match on donations and tuition reimbursement
  • A comprehensive set of emotional, financial, physical and social wellbeing programs
DEI Statement

We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal opportunity workplace.
