If you're inspired by innovation, hard work, and a passion for data, this may be the ideal opportunity to leverage your background in Big Data, Software Engineering, Data Engineering, or Data Analytics to design, develop, and deliver innovative big data solutions for a diverse set of global and enterprise clients.
At phData, our proven success has skyrocketed the demand for our services, resulting in quality growth at our company headquarters, conveniently located in downtown Minneapolis, and expansion throughout the US. Notably, we've also been voted Best Company to Work For in Minneapolis for three consecutive years.
As the world’s largest pure-play Big Data services firm, our team includes Apache committers, Spark experts and the most knowledgeable Scala development team in the industry. phData has earned the trust of customers by demonstrating our mastery of Hadoop services and our commitment to excellence.
In addition to a phenomenal growth and learning opportunity, we offer a competitive base salary and excellent perks, including an annual bonus, extensive training, and paid Cloudera certifications, in addition to generous PTO and a long-term incentive plan for employees.
As a Solution Architect on our Big Data Consulting Team, your responsibilities will include:
Design and implement streaming, data lake, and analytics big data solutions
Create and direct testing strategies including unit, integration, and full end-to-end tests of data pipelines
Select the right storage solution for each project, comparing Kudu, HBase, HDFS, and relational databases based on their strengths
Utilize ETL processes to build data repositories; integrate data into the Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), and Spark, Hive, or Impala (transformation)
Partner with our Managed Services team to design and install on-prem or cloud-based infrastructure, including networking, virtual machines, containers, and software
Determine and select best tools to ensure optimized data performance; perform Data Analysis utilizing Spark, Hive, and Impala
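Sqoop, Kafka, and Hive are cluster-scale tools, but the batch-ingest, transform, and load pattern behind the responsibilities above can be sketched in a few lines. This is a purely illustrative toy, assuming Python's stdlib sqlite3 as a stand-in for the data lake and a CSV string in place of an RDBMS extract; every name in it is hypothetical.

```python
import csv
import io
import sqlite3

# Toy stand-in for a data lake table; sqlite3 replaces Hive/Impala here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, amount REAL)")

# "Batch ingest": parse a CSV extract, analogous to what Sqoop would
# pull from a relational source.
raw = "user,amount\nalice,10.5\nbob,3.0\nalice,2.5\n"
rows = list(csv.DictReader(io.StringIO(raw)))
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(r["user"], float(r["amount"])) for r in rows],
)

# "Transformation": the kind of aggregation Hive or Impala would run
# at scale across the cluster.
totals = dict(
    conn.execute("SELECT user, SUM(amount) FROM events GROUP BY user")
)
print(totals)  # {'alice': 13.0, 'bob': 3.0}
```

In a real engagement the same three stages would be distributed: Sqoop or Kafka for ingest, and Spark, Hive, or Impala executing the SQL across the cluster.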
Technical Leadership Qualifications
5+ years of previous experience as a Software Engineer, Data Engineer, or Data Analyst
Expertise in core Hadoop technologies including HDFS, Hive, and YARN
Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc.
Expert programming experience in Java, Scala, or another statically typed programming language
Ability to learn new technologies in a quickly changing field
Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
Excellent communication skills including proven experience working with key stakeholders and customers
Ability to translate “big picture” business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, as well as custom analytics
Experience scoping activities on large scale, complex technology infrastructure projects
Customer relationship management, including handling project escalations and participating in executive steering meetings
Growing a career that's right for you is a life-changer, but it's undeniable that the job search gets tougher every year. With automated hiring processes, resume filters, and questionable interview practices, finding a job that fits a tech skillset has become seriously challenging.
That's where we step in. Careeriscope can help lighten the stress load by making your search a bit easier. We help you find matches based on the job search criteria you set, then send a summary of the results in an email every morning for your review.