GCP: GAE - Memcache best practices

Memcache is a distributed in-memory data cache that can sit in front of, or stand in for, robust persistent storage for some tasks.
App Engine (GAE) includes a memcache service for this purpose.

Best practices for using memcache:
1. Handle memcache API failures gracefully; do not expose errors to end users (see the sketch after this list)
2. Use the batching capability of the API when possible
3. Distribute load across your memcache keyspace
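A minimal sketch of the first practice, assuming the App Engine Java MemcacheService API; the "comments" key and the loadFromDatastore() fallback are hypothetical placeholders for your own data access:

```java
import java.util.logging.Level;

import com.google.appengine.api.memcache.ErrorHandlers;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class CommentCache {

    private final MemcacheService cache = MemcacheServiceFactory.getMemcacheService();

    public CommentCache() {
        // Log memcache errors and continue instead of throwing,
        // so a cache outage never surfaces to the end user.
        cache.setErrorHandler(ErrorHandlers.getConsistentLogAndContinue(Level.INFO));
    }

    public Object getComments() {
        // Treat the cache as best-effort: on a miss (or a swallowed error)
        // fall back to persistent storage and repopulate the cache.
        Object comments = cache.get("comments");
        if (comments == null) {
            comments = loadFromDatastore("comments"); // hypothetical fallback
            cache.put("comments", comments);
        }
        return comments;
    }

    private Object loadFromDatastore(String key) {
        // Placeholder for the real persistent-storage lookup.
        return "...";
    }
}
```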


  • Use sharding and aggregation to improve performance and efficiency.
  • Use a TTL (expiration policy) to make sure the memcache does not fill up indefinitely
  • Use getIdentifiable() and putIfUntouched() to manage values that may be affected by concurrent updates (as shown in the sketch after this list)
  • Use batching (e.g. getMulti("comments", "commented_by")) to fetch related values together instead of one by one
  • Use graceful error handling
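A minimal sketch of the batching, TTL and concurrent-update points, again assuming the Java MemcacheService API; the key names ("comments", "commented_by", "view_count") are hypothetical:

```java
import java.util.Arrays;
import java.util.Map;

import com.google.appengine.api.memcache.Expiration;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheService.IdentifiableValue;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class MemcachePatterns {

    private final MemcacheService cache = MemcacheServiceFactory.getMemcacheService();

    // Batching: fetch related values in one round trip instead of one by one.
    public Map<String, Object> getRelatedValues() {
        return cache.getAll(Arrays.asList("comments", "commented_by"));
    }

    // TTL: write with an expiration so stale entries do not pile up indefinitely.
    public void cacheWithTtl(String key, Object value) {
        cache.put(key, value, Expiration.byDeltaSeconds(3600)); // expire after 1 hour
    }

    // Compare-and-set: update a value that other requests may also be updating.
    public void incrementViewCount() {
        for (int attempt = 0; attempt < 3; attempt++) {
            IdentifiableValue current = cache.getIdentifiable("view_count");
            if (current == null) {
                cache.put("view_count", 1L);
                return;
            }
            Long updated = (Long) current.getValue() + 1;
            // putIfUntouched only writes if nobody else changed the value in between.
            if (cache.putIfUntouched("view_count", current, updated)) {
                return;
            }
        }
    }
}
```

For a simple counter the service also offers increment(); the getIdentifiable()/putIfUntouched() pattern above is shown because it generalizes to arbitrary values that concurrent requests may modify.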
