Service Bindings, Load Balancing, and System Status
This post was originally published on the Predix Developer Network Blog September 8, 2017.
I’m taking a few moments this Friday to focus your attention on a few worthy posts that came across the Predix Developer Forums. Reach out to me if you want to see specific topics covered.
Service Binding
There was a conceptual question about binding an app to a service on the forums this week that started some conversations.
Why bind at all?
From the Cloud Foundry documentation:
Depending on the service, binding a service instance to your application may deliver credentials for the service instance to the application.
It is not always necessary to bind, but doing so is a best practice for a 12 factor app:
A backing service is any service the app consumes over the network as part of its normal operation.
Advantages of this approach include the ability to swap out a resource accessed by a URL with other instances. Since your application is designed to be stateless, it can run in a scalable container across multiple environments such as dev, qa, and prod. By binding the application to its services, the URLs and Predix-Zone-Ids can be pulled from environment variables at runtime rather than configured into your app. Cloud Foundry knows which VCAP variables to provide because of this binding to a provisioned service.
Also see the Port binding and Dev/prod parity factors for more details.
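To make that concrete, here is a minimal sketch of pulling credentials out of the `VCAP_SERVICES` environment variable at runtime. The service label and the credential fields shown are assumptions for illustration; the actual shape depends on which services you bind.

```python
import json
import os

def service_credentials(service_label, vcap=None):
    """Return the credentials block for the first bound service instance
    whose label (or name) matches service_label, or {} if none is bound."""
    if vcap is None:
        vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for instances in vcap.values():
        for instance in instances:
            if (instance.get("label") == service_label
                    or service_label in instance.get("name", "")):
                return instance.get("credentials", {})
    return {}

# Hypothetical VCAP_SERVICES shape, for illustration only:
sample = {
    "predix-uaa": [{
        "label": "predix-uaa",
        "name": "dev-predix-uaa-free",
        "credentials": {"uri": "https://example-uaa.predix.io"},
    }]
}
creds = service_credentials("predix-uaa", vcap=sample)
```

Because the lookup happens at runtime, the same artifact can be pushed to dev, qa, and prod spaces and pick up whatever instances are bound there.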
How do you bind?
You can do this directly from your manifest. If you use a tool like the Python SDK to build your manifest, this can be done for you, giving you local/cloud parity. A manifest might look like:
```yaml
---
applications:
- name: manifest-example
  services:
  - dev-predix-uaa-free
  - dev-predix-timeseries-free
  - dev-predix-asset-free
  - dev-logstash-17-free
```
You can also bind manually on the command line with `cf bind-service`, though that is less portable and reproducible across environments (i.e. spaces).
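For reference, a manual binding session looks roughly like the following. The app and service instance names are examples; substitute your own.

```shell
# Bind an already-provisioned service instance to an app,
# then restage so the new VCAP_SERVICES entries are picked up.
cf bind-service my-app dev-predix-uaa-free
cf restage my-app

# Inspect the resulting environment variables:
cf env my-app
```

Anything bound this way lives only in that space's state, which is why capturing the same bindings in the manifest is the more reproducible option.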
There are some more examples in the forums for Java binding and you can share your own thoughts. There is also a new How to Deploy App on Multiple Spaces post that talks about using manifests across multiple environments.
Network Architecture
Predix Cloud is not unlike other network architectures that are fronted by an AWS Elastic Load Balancer (ELB). For another example, see cloud.gov. On the forums, a great response from Siva Balan offers insight into how this can cause timeouts for idle connections.
From AWS ELB documentation:
By default, Elastic Load Balancing sets the idle timeout value to 60 seconds. Therefore, if the target doesn’t send some data at least every 60 seconds while the request is in flight, the load balancer can close the front-end connection. To ensure that lengthy operations such as file uploads have time to complete, send at least 1 byte of data before each idle timeout period elapses, and increase the length of the idle timeout period as needed.
What this means is that large file uploads, sensors that only periodically transfer data, and similar workloads in US-WEST may experience connection resets or other request timeouts.
Don’t let your microservice get put in timeout. If you have other thoughts on how to design a solution using WebSockets, asynchronous services, or messaging-oriented architectures, jump into the conversation for Not getting httpresponse back in restful after 2.5 minutes.
You May Have Missed
- You can find the status of system services from https://status.predix.io/ …more
- Resources for learning about Predix System Architecture …more
- Exporting or migrating data from Database as a Service (i.e. Postgres) …more
Hope that helps.