
How relevant is it to consider deployment architecture during the design stage of an application?

During a recent assignment I ran into this question, and I decided to answer it through this post.

In the old days of implementation, deployment architecture was the last hurdle before pushing an application to production. I sincerely believe the equation has changed now, mainly because we are moving away from traditional deployment options such as purchasing hardware and managing our own data centers, or outsourcing data center management to a hosting service provider.

Today, deployment options are commodities. I would go one step further and say that if you are considering hosting your application on a platform like GAE or Force.com, you do not even need to worry about the usual problem areas of scalability, availability and fail-over; the hosting platform takes care of those aspects for you. PaaS has its own limitations, though. That is why I favor infrastructure services as a commodity: they give you control of your infrastructure, and you still design it yourself.

Why would I consider deployment aspects during my design? Does that not take the abstraction away?
I no longer believe in the paradigm I once preached: write your software once and run it anywhere. Today even simple applications require handshakes across many applications and infrastructures, and are built with many technologies.

Cloud options are now prime business drivers. Organizations already know how and where their applications are going to be deployed, and your application design should leverage the advantages of that choice.
For example, when I know my deployment is on the AWS cloud, I would choose an S3 bucket as my storage option, because I do not need to design fail-over separately. At the same time, there are certain issues around S3 with respect to the security of my content. Designing a general solution may not address that problem; I prefer to design a solution specific to the problem, so that I can keep it simple.
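To make the S3 example concrete, here is a minimal sketch of what I mean, assuming boto3 and a hypothetical bucket name and helper functions: content stays private and encrypted at rest, and is shared only through time-limited pre-signed URLs, while durability and fail-over are left to S3 itself.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

BUCKET = "my-app-content"  # hypothetical bucket name for illustration


def upload_private(key: str, body: bytes) -> None:
    """Store content encrypted at rest; S3 handles durability and fail-over."""
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=body,
        ServerSideEncryption="AES256",  # encrypt at rest to address content security
        ACL="private",                  # keep objects non-public by default
    )


def share_temporarily(key: str, seconds: int = 300) -> str:
    """Hand out time-limited read access instead of a public URL."""
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=seconds,
    )


if __name__ == "__main__":
    try:
        upload_private("reports/q1.pdf", b"...report bytes...")
        print(share_temporarily("reports/q1.pdf"))
    except ClientError as err:
        print(f"S3 call failed: {err}")
```

This is exactly the kind of problem-specific design I am talking about: the fail-over story comes free with the platform, and the remaining effort goes into the one issue (content security) that the platform does not solve for me.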

It does not take the abstraction away. A solution should address its issues quickly and, importantly, in a simple manner. If making the solution more abstract makes it complex, I would not even try.

Another example: say you need a shared file system, and your general application design would reach for NFS. On AWS there are options like Gluster, which give you a shared file system along with other deployment advantages that you would otherwise have to consider and address separately if you designed the system around NFS. The sketch below shows why this remains a deployment decision rather than an application-level one.
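Here is a small sketch of that point (the environment variable, mount path and helper names are hypothetical): the application code only sees a configurable mount point, so whether that mount is backed by NFS, Gluster or something else stays a deployment-time choice, and the abstraction in the application is preserved.

```python
import os
from pathlib import Path

# The mount point is injected at deployment time; it may be backed by
# NFS, GlusterFS, or whatever shared file system the target platform offers.
SHARED_ROOT = Path(os.environ.get("SHARED_FS_ROOT", "/mnt/shared"))


def save_upload(name: str, data: bytes) -> Path:
    """Persist a file on the shared volume so every application node can read it."""
    target = SHARED_ROOT / "uploads" / name
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)
    return target


def read_upload(name: str) -> bytes:
    """Read the file back from any node that mounts the same shared volume."""
    return (SHARED_ROOT / "uploads" / name).read_bytes()
```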

In any case, some deployment aspects, such as scalability, availability and fail-over, are common issues that have to be addressed anyway, so why keep them for last? You may find better options, or a better way of dealing with them, if you include them at the application design stage. At the very least, you give yourself time to analyze.

I welcome your comments.
