BI & Data Architect at Sisense
Ramat Gan, IL

Who are we?

We are Sisense, a radically innovative BI company focused on redefining every aspect of business analytics. We love innovation; we constantly seek to improve our solutions and delight our customers. Turning complexity into simplicity is our goal, and we accept nothing less than WOW. Sisense provides a single-stack BI solution, from a blazing-fast analytical server that can mash up complex data sets from various sources to a killer analytical product that turns data into actionable insights using proprietary technologies that leave other analytical engines in the dust.

What are we looking for?

We are looking for a killer data specialist with deep familiarity with data sources — someone who will be responsible for research, design, and developer education, staying up to date with and up to the challenge of the ongoing changes and emerging technologies in the data world. You'll serve as an architect for our data team, collaborating to deliver high-performing, scalable, and extensible connectivity components for our next-gen data connectors.

What will you do?

  • Be part of the team designing and planning data-intensive product features
  • Research and use the latest and greatest Big Data technologies
  • Architect large and complex Sisense deployments

What do you need?

  • Broad view of BI customer needs and the different technologies trying to tackle them.
  • Proven track record in defining, managing and delivering BI projects.
  • 4+ years' experience with modern NoSQL, big data, and distributed data warehousing.
  • 6+ years' experience with RDBMS modeling and SQL.
  • 4+ years' experience with enterprise data warehousing, BI, and ETL tools.
  • Experience with web services integration and CRM products.
  • Ability to work in a fast-paced, growing startup using an agile process.
  • Bachelor's degree in Computer Science or another relevant technical degree.
  • Great attitude and a team player.

Advantages:

  • Product management skills.
  • Coding skills in Java/Scala.
  • Experience with the Hadoop ecosystem (e.g., Spark, Hive, Pig, Impala).
  • Experience with OLAP.