Data Engineer

Job description

We are looking for a Data Engineer who is passionate about scalable data architectures, cloud technology, complex data transformations, and handling large data streams on the fly, and who wants to see their work translated into real-life applications that impact hundreds of thousands of hotels worldwide.


In a fast-growing scale-up, priorities may shift from time to time. You'll gain experience and ownership in different domains depending on your ambition, but here are some general topics:

  • Implement, maintain and improve streaming and batch data processing pipelines, both to ingest raw data streams from external sources and to support the data transformations needed to power our data-driven products.
  • Set up and maintain infrastructure & tools to support our data processing pipelines.
  • Research and test data processing methodologies, tools and frameworks.
  • Actively contribute to the observability & testability of our data pipelines.


Technologies you’ll work with:

Google Cloud Pub/Sub, BigTable, BigQuery, Spanner, Kubernetes, Python

(feel free to ask us why these are the best fit for us)


About the team

The OTA Insight Data Engineering team in Ghent is the gatekeeper to our vast datasets. Our mission is to reliably transform, integrate and store the numerous data sources that power our BI products. Some datasets are acquired by web scraping or API integrations; others are pushed to us by external vendors. Our main dataset holds 3 trillion (yes, that's 12 zeros) hotel rates. The team works in close collaboration with our Product team and with a highly talented group of software engineers, data scientists, and project managers to drive initiatives forward.




About OTA Insight

OTA Insight is a scale-up within the hotel industry, founded in 2012 with a vision to provide user-friendly tools to hoteliers. Today we are considered the global leader in hotel BI and work with 50,000+ hotels in 168 countries. As there are more than 1 million hotels worldwide, we are still filled with ambition to grow further.


We generate value for our customers by visualising actionable insights from our vast datasets. Our tools help hotels analyse their competition's room pricing, analyse their hotel revenue, and find out where and when guests are looking and booking. Our products have a profound impact on the day-to-day activities of our customers, taking away guesswork and simplifying their routines. This has allowed us to grow rapidly over the past few years, adding new products as we solve new problems for our users.


About the offer

As we are a growth company, we offer:

  • A flexible environment that enables you to grow over time and define your role in the way you enjoy it most
  • Compensation that values your work and which we will proactively keep competitive
  • A choice to go green, as we take pride in respecting our environment: either a flexible mobility budget or the best electric car on the market
  • Peace of mind, as we truly care about our team, e.g. by offering the best health & ambulatory insurance on the Belgian market
  • Motivation to deliver your best work, as we have built a high-bar, very talented team of individuals who are friendly, creative, open-minded and passionate about what they do

Requirements

We are looking for:

  • 2+ years of experience in a data-engineering-related position.
  • Knowledge of one or more cloud database solutions such as BigQuery, Snowflake, BigTable, DynamoDB, or Spanner.
  • Experience with microservice architectures and one or more data streaming systems such as Kafka, Google Cloud Pub/Sub, or Amazon SNS.
  • A desire to continually keep up with advancements in data engineering practices.
  • Proven experience distilling insights from datasets and normalising them according to business requirements.
  • An active role in the data research phase of new projects, with contributions to the idea-generation process.
  • Curiosity about how a technology works under the hood, so you can properly assess its ideal use case and understand its limitations.
  • Fluency in English.

We welcome:

  • Experience using Python and/or Golang
  • Familiarity with, and advocacy of, DataOps practices
  • Hands-on experience with one or more data workflow tools such as Airflow, Luigi, or Azkaban
  • Familiarity with the serverless offerings of cloud providers such as GCP or AWS