I was with VisibleWorld when they started back in 2000; I’m back again and what they’ve done since is pretty amazing.
Heard about programmatic ad buying? Old news – what VisibleWorld is doing is far more innovative…
How about custom, one-to-one programmatic ad buying, with ads delivered to the set-top boxes of individual cable TV viewers? Yes – people will now get their own ads. There are plenty of AdTech companies around – but no one else is doing this.
Cable ad revenues are $70 billion annually and show no signs of slowing down. Only VisibleWorld is positioned to take advantage of the opportunity.
Here’s the VisibleWorld technology development environment – do you see {you}?
The API layer, built on node.js, needs more crafty, experienced software engineers:
In a nutshell, both software engineers need to have already developed craftsman-quality software in node.js – and be as enthusiastic about node.js as we are.
Both will design, implement and deploy robust RESTful APIs in node.js that are internally and externally accessible, paying particular attention to security, authentication, queuing (RabbitMQ) and load balancing/frontends/proxies (nginx).
We follow a solid Agile development process with TDD, automated document generation, configuration control through Git and deployment through dedicated build servers.
More specifically, one of the two positions will also involve developing “plugins” in our API framework. So, in addition to node.js, that engineer will bring (a) an affinity for interfacing node.js with other languages (such as GNU C++ and Java) to call 3rd-party toolboxes that solve, for example, large-scale linear-programming problems, and (b) experience with optimization algorithms such as linear and non-linear programming, including translating these optimization models into production-quality code.
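For flavor, here is a minimal sketch of the kind of optimization problem involved. Everything in it is a made-up illustration: a toy two-variable ad-allocation LP solved by brute-force vertex enumeration. In production you would call a real LP toolbox, as described above, rather than roll your own solver.

```python
from itertools import combinations

# Toy ad-allocation LP (illustrative numbers only):
#   maximize  3*x + 2*y          (revenue from two ad slots)
#   subject to  x +  y <= 100    (impressions available)
#              2*x +  y <= 150   (campaign budget)
#               x >= 0, y >= 0
# Each constraint is stored as (a, b, r) meaning a*x + b*y <= r.
constraints = [
    (1, 1, 100),
    (2, 1, 150),
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def solve_2var_lp(c1, c2, constraints):
    """Enumerate vertices (pairwise constraint intersections) and
    return (value, x, y) for the feasible vertex maximizing c1*x + c2*y."""
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:          # parallel constraint boundaries
            continue
        x = (r1 * b2 - r2 * b1) / det
        y = (a1 * r2 - a2 * r1) / det
        if all(a * x + b * y <= r + 1e-9 for a, b, r in constraints):
            value = c1 * x + c2 * y
            if best is None or value > best[0]:
                best = (value, x, y)
    return best

value, x, y = solve_2var_lp(3, 2, constraints)
print(x, y, value)  # optimum at x=50.0, y=50.0, value=250.0
```

Vertex enumeration is only sane for two variables; the real problems here are large-scale, which is exactly why the role involves bridging node.js to dedicated solver toolboxes.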
The core Hadoop layer, for which we currently have a senior architect/team lead and one engineer, needs more software engineers. We are currently adding 50+ million ad-impression records from digital set-top boxes every day, and the volume is increasing (the Hadoop cluster currently holds over 4 billion records).
The new software engineer would work on ingesting new data feeds from sources such as Smart TVs and on building algorithms (Python, MapReduce jobs, Hive) that make use of the data.
Examples of such algorithms would be:
…Reporting feeds for customers;
…Nightly update feeds to set-top boxes;
…Aggregation and filtering for export to SQL and visualization on Tableau;
…De-duplication, reach & frequency calculations, monitoring systems (Nagios).
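Several of these, such as reach & frequency, come down to aggregations over impression records. Here is a hedged sketch in Python; the (household_id, ad_id) record layout is a stand-in I invented for illustration, not the real feed schema:

```python
from collections import Counter

def reach_and_frequency(impressions):
    """Compute reach (unique households reached) and average frequency
    (impressions per reached household). Each record is a
    (household_id, ad_id) tuple -- an illustrative schema only."""
    per_household = Counter(hh for hh, _ad in impressions)
    reach = len(per_household)
    frequency = sum(per_household.values()) / reach if reach else 0.0
    return reach, frequency

# Toy feed: household H1 saw ads three times, H2 once.
feed = [("H1", "ad42"), ("H1", "ad42"), ("H2", "ad42"), ("H1", "ad7")]
print(reach_and_frequency(feed))  # (2, 2.0)
```

At 50+ million records a day this obviously runs as a distributed job (Hive or MapReduce) rather than in-memory, but the logic is the same.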
We have one local cluster and we use Amazon EMR for nightly batch jobs.
Development is on Hadoop and uses dialects of SQL optimized for Hadoop (Hive and Impala each use their own SQL variant), so SQL expertise is very important. Native MapReduce jobs are written in Java, so knowledge of core Java is critical. Python and Linux scripting are used to automate tasks and run some Hadoop streaming jobs.
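To make the streaming part concrete, here is a minimal local sketch of what a Python streaming job can look like, counting impressions per ad. The tab-separated input schema is an assumption for illustration, and in a real Hadoop streaming job the mapper and reducer would be separate scripts reading stdin and writing stdout, with Hadoop doing the sort/shuffle between them:

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Map phase: emit (ad_id, 1) per raw impression line.
    Assumed line format: household_id <TAB> ad_id <TAB> timestamp."""
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 2:
            yield fields[1], 1

def reducer(pairs):
    """Reduce phase: sum counts per ad_id. Hadoop delivers mapper
    output sorted by key, which is what groupby relies on; here we
    sort locally to simulate the shuffle."""
    for ad_id, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield ad_id, sum(count for _key, count in group)

# Local simulation of the pipeline: mapper | sort | reducer.
raw = [
    "H1\tad42\t2014-06-01T20:00:01",
    "H2\tad42\t2014-06-01T20:00:02",
    "H1\tad7\t2014-06-01T20:00:03",
]
print(dict(reducer(mapper(raw))))  # {'ad42': 2, 'ad7': 1}
```

The same aggregation is a one-line GROUP BY in Hive, which is why SQL fluency matters as much as the scripting.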
The existing foundation and architecture have proven very solid, so not much advanced rebuilding or redesign needs to happen here. However, future plans encompass using advanced Business Intelligence to find patterns in the data (visualization using, for example, D3), along with advanced predictions, machine learning, optimization, and growth planning.
So is this {you}?
Email me and let me know (I’ll also take questions). Don’t worry about a resume right now; we’ll be talking code…