DISQO is a next-generation consumer insights platform. We provide the highest quality consumer data to the world's largest market research agencies, analytics companies, and brands. We operate one of the world's largest true consumer insights panels. This data helps our clients understand user behavior, build better experiences, and make better decisions. We use cutting-edge technology and innovative, out-of-the-box strategies to collect and analyze insights that help shape the products and services of tomorrow.
This is a great opportunity to join a fun, exciting, and highly motivated team and upgrade your skills while creating real impact. We use a modern tech stack and cloud infrastructure. We are looking not only for work experience, but for the willingness to step up to challenges and the ability to learn quickly.
We believe the best software is written and managed by small teams that know how to make the impossible possible. We use agile software development techniques and modern tools to focus our efforts on solving our business goals. We use OKRs to track everything we do. We deliver early and often. We obsess over our code, architecture, and infrastructure. And we believe that these practices lead to higher quality products.
· Design and implement complex technical solutions
· Design, build, manage, and optimize data pipelines, covering data transformations, data models, schemas, metadata, and data quality
· Develop complex data integration and transformation processes
· Ensure that extracted and displayed data meets the organization's business requirements
· Analyze data and enable machine learning
· Write jobs with Java or Scala
· Query data via SQL
· Participate in data quality assurance
· BA in Computer Science, Engineering, Mathematics or related field
· At least 4 years of relevant work experience as a Data Engineer
· At least 4 years of experience with Java
· At least 1 year of experience with Scala
· At least 2 years of experience with Spark
· At least 2 years of experience with databases
· At least 2 years of experience with AWS EMR and Lambda
· More than 2 years of streaming experience with Kafka or Kinesis
· Advanced ability to draw insights from data and clearly communicate them as required
· An ability and interest in working in a fast-paced and rapidly-changing environment
· Advanced written and verbal English communication skills
· Experience with Airflow is a plus
· Experience with AWS S3, EC2, and CloudWatch is a plus
· Experience with GitLab CI is a plus