About Turing.com:
Turing’s mission is to unleash the world’s untapped human potential. We use AI to source, evaluate, hire, onboard, and manage engineers remotely (including the HR and compliance aspects) on a larger platform that we call the “Talent Cloud”.
We recently achieved unicorn status with a valuation of $1.1B, after raising more than $140M across four rounds of funding. 900+ companies, including Johnson & Johnson, Pepsi, Dell, Disney+, and Coinbase, have hired Turing developers.
About the role:
At Turing, we are looking for talented data engineers to build and maintain scalable data pipelines and data architecture that power streamlined data processing, reporting infrastructure, and company-wide insights and technical intelligence. You will get the opportunity to work with leading U.S. companies.
Job responsibilities:
- Architect and implement databases and data pipelines that meet business demands
- Extract, transform, and load data from distributed data sources
- Develop and maintain operationally efficient analytic solutions
- Utilize tools such as AWS, Snowflake, Spark, and Kafka to develop highly scalable data pipelines
- Work closely with cross-functional teams and translate complex business requirements into technical specifications
- Participate in data migration, address issues, and recommend improvements
- Define standards and ensure the solution meets data warehousing design standards
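The responsibilities above center on the extract-transform-load (ETL) pattern. As a minimal sketch of that pattern (the schema, field names, and SQLite target are illustrative assumptions, not Turing's actual stack, which relies on tools like AWS, Snowflake, Spark, and Kafka):

```python
import sqlite3

# Hypothetical ETL sketch: stage names and schema are assumptions for
# illustration only, not a description of any specific production pipeline.

def extract(rows):
    """Extract: pull raw records from a source (here, an in-memory list)."""
    return rows

def transform(records):
    """Transform: normalize fields and drop records with missing amounts."""
    return [
        {"name": r["name"].strip().title(), "amount": float(r["amount"])}
        for r in records
        if r.get("amount") is not None
    ]

def load(records, conn):
    """Load: write cleaned records to the target store (SQLite here)."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales (name, amount) VALUES (:name, :amount)", records
    )
    conn.commit()

raw = [
    {"name": " alice ", "amount": "10.5"},
    {"name": "bob", "amount": None},       # dropped during transform
    {"name": "carol", "amount": "3"},
]

conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone()
print(total)  # (2, 13.5)
```

In a production setting, each stage would typically be a task in an orchestrator such as Airflow or Prefect, with the load targeting a warehouse like Snowflake rather than SQLite.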
Job requirements:
- Bachelor’s/Master’s degree in Computer Science or IT (or equivalent experience)
- 3+ years of experience working as a data engineer (rare exceptions for highly skilled developers)
- Ability to work with large data sets and create data pipelines and ETL applications
- Experience designing, implementing, testing, debugging, and deploying data pipelines
- Proficiency with data warehousing and pipeline tools such as Prefect, Airflow, AWS Glue, Kafka, serverless AWS services (Lambda, Kinesis, SQS, SNS), Fivetran, or Stitch Data/Singer
- Knowledge of database design and data modeling
- Strong command of statistical programming languages such as Python, R, and MATLAB
- Fluency in English for effective communication
- Ability to work full-time (40 hours/week) with a 4-hour daily overlap with US time zones