
Upload to YouTube using MRSS feed

In this blog, I will talk about one of the requirements related to YouTube integration. Here's the context: your customer publishes an RSS feed and does not want to log into the YouTube site manually and upload content. The customer wants the uploading activity to be automated and expects you to design a loosely coupled application.

This is one of the typical integration requirements in the media space. Google provides YouTube APIs through which one can build a stand-alone application.

Here is one solution that can be implemented. Design a stand-alone YouTubeUploader application that can be scheduled through a cron job. While YouTube equips developers with APIs, authentication mechanisms, and client libraries, it is important to segregate the roles and responsibilities of your classes. In my solution, I make YouTubeUploader the main class, which can be scheduled through a cron job. This class invokes a FeedParser, which accesses the feed through an HTTP URL. Nowadays, publishers usually use an MRSS feed to syndicate the content.
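To illustrate the intended separation of responsibilities, here is a minimal orchestrator skeleton. The three private methods stand in for the FeedParser, MediaContentDownloader and MediaContentUploader classes described below; their bodies are stubs, and the class and method names are my own choices rather than anything mandated by the YouTube API.

import java.io.File;
import java.util.Collections;
import java.util.List;

// Hypothetical orchestrator skeleton, intended to be run from a cron job.
public class YouTubeUploader {

    public static void main(String[] args) throws Exception {
        // Customer-specific configuration (feed URL, YouTube credentials)
        // would normally be read from a properties file.
        String feedUrl = "https://example.com/mrss.xml";

        for (String mediaUrl : parseFeed(feedUrl)) {   // FeedParser's job
            File video = downloadToTemp(mediaUrl);     // MediaContentDownloader's job
            uploadToYouTube(video);                    // MediaContentUploader's job
        }
    }

    // Parse the MRSS feed and return media URLs that are not already persisted.
    private static List<String> parseFeed(String feedUrl) {
        return Collections.emptyList(); // stub
    }

    // Download the binary content into a temporary folder.
    private static File downloadToTemp(String mediaUrl) {
        throw new UnsupportedOperationException("stub");
    }

    // Upload the binary content to YouTube.
    private static void uploadToYouTube(File video) {
        throw new UnsupportedOperationException("stub");
    }
}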

Let your FeedParser parse the MRSS feed and persist entries in a database so that duplicate entries are not processed again. Develop a MediaContentDownloader to download the binary content through its HTTP URL into a temporary folder. Finally, develop a MediaContentUploader to upload the binary content to the YouTube site. Make sure to externalize the customer-specific configuration, such as the YouTube credentials and the feed URL.
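As an example of the parsing step, a FeedParser can use the JDK's built-in, namespace-aware DOM parser to pull the media:content URLs out of the feed. The duplicate check against the database (by entry GUID or, as discussed in tip 4 below, by MD5 digest) is deliberately left out of this sketch.

import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Sketch of FeedParser: extracts the binary-content URLs from an MRSS feed.
public class FeedParser {

    private static final String MRSS_NS = "http://search.yahoo.com/mrss/";

    public List<String> fetchMediaUrls(String feedUrl) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true); // required to resolve the media: prefix
        Document doc = factory.newDocumentBuilder()
                              .parse(new URL(feedUrl).openStream());

        List<String> urls = new ArrayList<String>();
        // Each <media:content url="..."/> element carries the location of the binary.
        NodeList contents = doc.getElementsByTagNameNS(MRSS_NS, "content");
        for (int i = 0; i < contents.getLength(); i++) {
            Element content = (Element) contents.item(i);
            urls.add(content.getAttribute("url"));
        }
        return urls;
    }
}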

Some tips for direct upload to YouTube:

1. Use ClientLogin authentication

2. Use the direct or resumable upload method; with resumable upload, the first request sends the video metadata and subsequent requests upload the actual binary content

3. Persist the target 'Location' header in the database, so that it can be used while resuming an upload (see the upload sketch after this list)

4. For the duplicate-content check, one can compute an MD5 digest and persist it in the database against each entry. Before uploading, compute the MD5 digest of the new file and verify it against the digests persisted in the database (see the digest sketch after this list). However, this process may have performance implications.
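Here is a rough sketch of the resumable flow from tips 2 and 3, using plain HttpURLConnection. The endpoint and header names follow the legacy GData resumable-upload protocol as I recall it and should be verified against the current API documentation; the ClientLogin token and developer key are assumed to have been obtained separately.

import java.io.File;
import java.io.FileInputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of MediaContentUploader: request an upload 'Location', then send the binary.
public class MediaContentUploader {

    // Step 1: send the video metadata and read the 'Location' header from the response.
    // Persist the returned value so an interrupted upload can be resumed later.
    public String requestUploadLocation(String authToken, String developerKey,
                                        String atomMetadataXml, String fileName) throws Exception {
        URL url = new URL(
            "http://uploads.gdata.youtube.com/resumable/feeds/api/users/default/uploads");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Authorization", "GoogleLogin auth=" + authToken);
        conn.setRequestProperty("X-GData-Key", "key=" + developerKey);
        conn.setRequestProperty("Slug", fileName);
        conn.setRequestProperty("Content-Type", "application/atom+xml; charset=UTF-8");

        OutputStream out = conn.getOutputStream();
        out.write(atomMetadataXml.getBytes("UTF-8"));
        out.close();

        return conn.getHeaderField("Location");
    }

    // Step 2: upload the actual binary content to the persisted 'Location' URL.
    public int uploadBinary(String location, File video) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(location).openConnection();
        conn.setRequestMethod("PUT");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "video/*");

        OutputStream out = conn.getOutputStream();
        FileInputStream in = new FileInputStream(video);
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        in.close();
        out.close();

        return conn.getResponseCode(); // e.g. 201 Created on success
    }
}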
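And a small helper for tip 4: hashing the file incrementally avoids loading the whole video into memory, although digesting large files is still the performance cost mentioned above. The database lookup of previously stored digests is omitted.

import java.io.FileInputStream;
import java.io.InputStream;
import java.security.MessageDigest;

// Computes the MD5 digest of a downloaded file for the duplicate-content check.
public class Md5Util {

    public static String md5Hex(String filePath) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        InputStream in = new FileInputStream(filePath);
        try {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                md.update(buffer, 0, read); // hash the file chunk by chunk
            }
        } finally {
            in.close();
        }

        // Convert the raw digest bytes to the usual lowercase hex string.
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}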

Deployment diagram for your reference ...
