Point of view on TV Application Layer

With the internet everywhere, content aggregators, broadcasters, and content producers are compelled to find new delivery channels. SMART TV happens to be the next trend riding that wave. Of course, we need to see how the SMART TV concept catches on in India, given the cost barrier.
SMART TVs are finding their way in the West; however, it remains to be seen whether they can double up as an effective delivery medium. In my view, the major concern is that the user still has to switch from the main screen to the TV application to access it. As a viewer, I don't find it convenient to navigate there every time I need the TV app; I would rather use my tablet to access the app.

I wish TV applications could be displayed alongside the main screen, more as a companion experience, with access to the context of the content being broadcast. That is a tricky place to be, and I am not sure it will become reality because of content protection policies. However, one thing is for sure: the broadcasters are sitting on a gold mine.

A huge monetization opportunity awaits ... for both content broadcasters and the TV app developer community.

Coming to SMART TVs, there is no standard for TV apps, and running a TV app on a different manufacturer's device requires rewriting and recompiling the code. That is heavily discouraging for the developer community that builds TV apps.

At Mindtree, we evaluated TV Application Layer (TAL), an open source framework from the BBC that attempts to provide an abstraction layer, with the intent that you write a TV application once and run it on any connected SMART TV.
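
To give a flavour of the write-once idea, here is a minimal sketch of what a TAL component might look like, loosely based on the framework's getting-started pattern. The module path 'sampleapp/appui/components/simple' is a made-up example, and the module loader call and base-constructor convention (define vs. require.def, init.base.call vs. this._super) vary between TAL versions, so treat this as illustrative rather than exact.

    // Illustrative TAL component (module path is hypothetical).
    define('sampleapp/appui/components/simple',
        [
            'antie/widgets/component',
            'antie/widgets/button',
            'antie/widgets/label'
        ],
        function (Component, Button, Label) {
            'use strict';
            return Component.extend({
                init: function init() {
                    // Call the base Component constructor with an id for this component.
                    init.base.call(this, 'simplecomponent');

                    // A focusable button holding a text label; TAL maps each device's
                    // remote-control key events onto these widgets for us.
                    var button = new Button();
                    button.appendChildWidget(new Label('Hello from TAL'));
                    this.appendChildWidget(button);
                }
            });
        }
    );

The appeal is that the same component code is served to every device; as I understand it, TAL picks the appropriate device abstraction at runtime from its device configuration files, so the application itself does not need to be rewritten per manufacturer.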

Our evaluation article can be found here.
