
Big Data Engineer

Columbus

Opportunity Details

Full Time · Database Engineering

Improving is looking for a Big Data Engineer to play a key role in building an industry-leading Customer Information Analytics Platform. Are you passionate about Big Data and highly scalable data platforms? Do you enjoy building end-to-end analytics solutions that help drive business decisions? If you have experience building and maintaining highly scalable data warehouses and data pipelines with high transaction volumes, we want to hear from you!

The full-stack Data Engineer will:

  • Design, develop, implement, test, document, and operate large-scale, high-volume, high-performance data structures for analytics and deep learning
  • Implement real-time and batch data ingestion routines using best practices in data modeling and ETL/ELT processes, leveraging company technologies and Big Data tools
  • Provide online reporting and analysis using business intelligence tools and a logical abstraction layer over large, multi-dimensional datasets from multiple sources
  • Gather business and functional requirements and translate them into robust, scalable, operable solutions that fit well within the overall data architecture
  • Produce comprehensive, usable dataset documentation and metadata
  • Provide input and recommendations on technical issues to the project manager

Basic Qualifications:

  • 3+ years of experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools
  • 2+ years of experience with data modeling concepts
  • 1+ years of Python and/or Java development experience
  • 1+ years of experience in Big Data stack environments (EMR, Hadoop, MapReduce, Hive)
  • Experience writing MapReduce and/or Spark jobs
  • Demonstrated strength in architecting data warehouse solutions and integrating technical components
  • Strong analytical skills with excellent knowledge of SQL
  • 4+ years of work experience with very large data warehousing environments
  • Excellent communication skills, both written and verbal

Preferred Qualifications:

  • Experience gathering requirements and formulating business metrics for reporting
  • Experience with software version control tools (Git, Jenkins, Apache Subversion)
  • Experience with cloud or on-premise middleware and other enterprise integration technologies
  • Meets or exceeds the company’s leadership principles for this role
  • Meets or exceeds the company’s expectations for functional/technical depth and complexity in this role

Must Have Qualifications:

      • Data Modeling (star, snowflake, dimensional modeling, and 3NF)
      • ETL design using Spark (Scala or Python) and SQL
        • Candidates whose only ETL experience is with SQL Server or Informatica typically don’t do well in our environment, because we don’t use a graphical UI for ETL
      • Hadoop/EMR
      • Redshift
      • At least one programming/scripting language (Python, Ruby, Java, Perl)
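For a concrete (and purely illustrative) sense of the star-schema modeling and SQL skills listed above, here is a minimal sketch using Python's built-in sqlite3 module. All table names, column names, and data are hypothetical, and in-memory SQLite merely stands in for a real warehouse such as Redshift:

```python
import sqlite3

# In-memory SQLite stands in for a real warehouse; the schema below is
# a hypothetical minimal star schema: one fact table, two dimensions.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
-- Dimension tables describe the "who/what/when" of each fact.
CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_name TEXT,
    category     TEXT
);
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,  -- surrogate key, e.g. 20240115
    year     INTEGER,
    month    INTEGER
);
-- The fact table holds measures plus foreign keys into the dimensions.
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    units_sold  INTEGER,
    revenue     REAL
);
""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                [(20240115, 2024, 1), (20240216, 2024, 2)])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(1, 20240115, 10, 100.0),
                 (2, 20240115, 5, 250.0),
                 (1, 20240216, 3, 30.0)])

# A typical analytic query: join facts to dimensions and aggregate.
rows = cur.execute("""
    SELECT p.category, d.year, SUM(f.revenue) AS total_revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    JOIN dim_date d    ON d.date_key    = f.date_key
    GROUP BY p.category, d.year
""").fetchall()
print(rows)  # [('Hardware', 2024, 380.0)]
```

In practice the same schema and query would live in Redshift and be populated by Spark ETL jobs rather than `executemany` calls, but the modeling and SQL patterns are the same.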
Think this opportunity is a good fit for you?
Apply For This Position

“The people I am surrounded by are simply the best, and I am proud to call them friends.”

― Lori Davis, Technical Recruiter
