Senior Data Engineer - Snowflake

Posted 7 months ago
Job closed
Tuple

Our Client - Hospital & Health Care company

  • Richardson, TX
$65.99 - $80.99/hour
Exact compensation may vary based on skills, experience, and location.
40 hrs/wk
Contract to Hire (W2)
Remote work partially (40%)
Travel not required
Start date
November 22, 2024
End date
November 22, 2025
Superpower
Technology
Capabilities
Data Science and Machine Learning
Software Development
Preferred skills
Apache Airflow
SQL Tuning
Data Warehousing
Data Transformation
Data Profiling
Data Modeling
Data Extraction
Data Engineering
Data Analysis
Analytical Skills
Data Manipulation
PostgreSQL
Development Environment
Informatica
Stored Procedure
Scrum (Software Development)
Agile Methodology
Python (Programming Language)
Data Quality
Github
PL/SQL
Mathematics
Design Specifications
Azure DevOps
Data Infrastructure
SnapLogic
Snowflake (Data Warehouse)
Extract Transform Load (ETL)
Data Warehousing And Business Intelligence (DWBI)
Data Ingestion
Preferred industry experience
Hospital & Health Care
Experience level
5 - 8 years of experience

Job description

Our client is the global leader in commercial real estate services and investments. With services, insights, and data that span every dimension of the industry, they create solutions for clients of every size, in every sector, and across every geography. Their mission is to realize the potential of their clients, professionals, and partners by building the real estate solutions of the future.

We are seeking a Senior Data Engineer - Snowflake on a contract basis to help support our customer's business needs. This role is hybrid (3 days onsite) in Richardson, TX.

The ideal candidate will have 8+ years of experience (Senior Data Engineer/Data Architect) building enterprise-level data warehouse solutions centered on Snowflake, with strong data modeling skills, proficiency in SnapLogic, an understanding of CI/CD practices, and familiarity with Agile methodologies.

Responsibilities:

  • Plans, analyzes, develops, maintains, and enhances client systems, and supports systems of moderate to high complexity.
  • Participates in the design, specification, implementation, and maintenance of systems.
  • Designs, codes, tests, and documents software programs of moderate complexity according to requirement specifications.
  • Design, develop, and maintain scalable data pipelines using Snowflake, dbt, SnapLogic, and ETL tools.
  • Participates in design reviews and technical briefings for specific applications.
  • Integrate data from various sources, ensuring consistency, accuracy, and reliability.
  • Develop and manage ETL/ELT processes to support data warehousing and analytics.
  • Assists in the preparation of requirement specifications; analyzes data; and designs and develops data-driven applications, including documenting and revising user procedures and/or manuals.
  • Resolves software development issues of medium to high complexity that may arise in a production environment.
  • Utilize Python for data manipulation, automation, and integration tasks.
  • Assemble large, complex data sets that meet functional and non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL Server, PostgreSQL, SSIS, T-SQL, and PL/SQL.
  • Work with stakeholders, including the Product, Data, Design, Frontend, and Backend teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Write complex SQL, T-SQL, and PL/SQL queries, stored procedures, functions, and cursors in SQL Server and PostgreSQL; peer-review other team members' code.
  • Analyze long-running queries, functions, and procedures, and design and develop performance optimization strategies.
  • Create and manage SSIS packages and/or Informatica workflows to perform day-to-day ETL activities, using a variety of strategies for complex data transformations.
  • Perform DBA activities such as maintaining system health, performance tuning, managing database access, deploying to higher environments, and providing on-call support; shell scripting and Python scripting are a plus.
  • Employ Continuous Integration and Continuous Delivery/Deployment (CI/CD) tools for optimal productivity.
  • Collaborate with scrum team members during daily standups and actively engage in sprint refinement, planning, reviews, and retrospectives.
  • Analyzes, reviews, and alters programs to increase operating efficiency or adapt to new requirements.
  • Writes documentation to describe program development, logic, coding, and corrections.
Qualifications:
  • Bachelor's degree (BA/BS) in a related field such as information systems, mathematics, or computer science.
  • 8+ years of relevant work experience required.
  • Expertise in Data Extraction, Transformation, Loading, Data Analysis, Data Profiling, and SQL Tuning.
  • Expertise in relational and dimensional databases on engines such as SQL Server, PostgreSQL, and Oracle.
  • Strong experience in designing and developing enterprise scale data warehouse systems using Snowflake.
  • Strong expertise in designing and developing reusable, scalable data products with data quality scores and integrity checks.
  • Strong expertise in developing complex end-to-end data workflows using data ingestion tools such as SnapLogic, ADF, and Matillion.
  • Experience with AWS/Azure cloud platforms, Agile methodologies, and DevOps is a big plus.
  • Experience architecting cloud-native solutions across multiple B2B and B2B2C data domains.
  • Experience architecting modern APIs for the secure sharing of data across internal application components as well as external technology partners.
  • Experience with data orchestration tools such as Apache Airflow and Chronos on a Mesos cluster.
  • Expertise in designing and developing data transformation models in dbt.
  • Ability to compare and analyze statistical information to identify patterns, relationships, and problems, and to use this information to design conceptual and logical data models and flowcharts for presentation to management.
  • Experience developing CI/CD pipelines in Jenkins or Azure DevOps.
  • Knowledge of Python for data manipulation and automation.
  • Knowledge of data governance frameworks and best practices.
  • Knowledge of integrating with source code versioning tools such as GitHub.
  • Excellent written and verbal communication skills. Strong organizational and analytical skills.

Notes:

  • 2 interview rounds
  • Opportunity for conversion to full-time based on performance

We offer a competitive salary range for this position. Most candidates who join our team are hired at the median of this range, ensuring fair and equitable compensation based on experience and qualifications.

Perks are available through our 3rd-party Employer of Record (available upon completion of the waiting period for eligible engagements).
Benefits: Medical, Dental, and 401k (no match)

Please note: In order to create a safe, productive work environment, our client is requiring all contractors who plan to be onsite to be fully vaccinated according to the CDC guidelines. Prior to coming into our offices, contractors will be required to attest that they are fully vaccinated.

An Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, or protected veteran status and will not be discriminated against on the basis of disability.

All applicants applying for U.S. job openings must be legally authorized to work in the United States and are required to have U.S. residency at the time of application.

If you are a person with a disability needing assistance with the application, or at any point in the hiring process, please contact us at support@themomproject.com.