How common Docker images improve deliverability

Docker: the best community for plug-and-play (one-click) images

If I wanted to build some form of user integration with Your Data Place (YDP) WITHOUT Docker, for, say, MySQL servers, I'd have to set up something like:

  1. Create a "master" MySQL instance.
  2. Create a database on behalf of a user.
  3. Create a user account with permissions scoped to that database.
  4. Save these credentials to our database.
  5. Write a public-facing SDK which uses a YDP token to access the ability to query/insert/etc.
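Under the hood, steps 2 through 4 would amount to a handful of SQL statements issued against the master instance. A minimal sketch (the `provision_user_db` helper and the `ydp_` naming scheme are my assumptions, not a real YDP API):

```python
import secrets

def provision_user_db(user_id: str) -> dict:
    """Build the SQL needed to provision a per-user database on the
    shared "master" MySQL instance (steps 2 and 3 above)."""
    db_name = f"ydp_{user_id}"
    password = secrets.token_urlsafe(16)  # saved to our database in step 4
    statements = [
        f"CREATE DATABASE `{db_name}`;",
        f"CREATE USER '{user_id}'@'%' IDENTIFIED BY '{password}';",
        # Scope the grant to this user's database only
        f"GRANT ALL PRIVILEGES ON `{db_name}`.* TO '{user_id}'@'%';",
    ]
    return {"db": db_name, "user": user_id, "password": password, "sql": statements}
```

Notice how much per-user state (credentials, grants, cleanup) the master instance has to carry; that is exactly the burden Docker removes below.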

Which is great, and really quite lean and usable. But what if I want to do this for four database technologies? I'm talking not just MySQL, but PostgreSQL, MongoDB, MariaDB, and so forth. As an engineer who only knows how to administer one or two of these database servers, the potential for setting up bad instances rises quickly with user demand for new services.

How do I plan to address this with YDP? Docker. See, it would be much easier to spawn an instance (like the master in the example above) for each user. Why? Because we don't care if users blow up their own instance: it only hurts them, and our pricing model is per-compute. By adding Docker to this stack, we could swap the setup around to something like this:

  1. Create a Docker MySQL instance. (This may be a community image like Docker's one-click MySQL image, or it may be a YDP custom image that pulls the Docker MySQL image and installs the required YDP communication tools as well.)
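As a sketch of what that single step could look like with the Docker SDK for Python (the `mysql_container_spec` helper and the `ydp-mysql-*` naming are assumptions on my part, not real YDP code):

```python
def mysql_container_spec(user_id: str, root_password: str) -> dict:
    """Keyword arguments for docker.from_env().containers.run(),
    spawning one disposable MySQL container per user."""
    return {
        "image": "mysql:8",
        "name": f"ydp-mysql-{user_id}",
        "detach": True,
        "environment": {
            "MYSQL_ROOT_PASSWORD": root_password,
            "MYSQL_DATABASE": "ydp",
        },
        "ports": {"3306/tcp": None},  # let Docker pick a free host port
    }

# With the Docker SDK for Python installed and a daemon running:
# import docker
# client = docker.from_env()
# container = client.containers.run(**mysql_container_spec("jack", "s3cret"))
```

If a user corrupts this instance, we throw the container away and run the same spec again; no shared master is at risk.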

The ideal instance workflow, and expanding on the YDP SDK

So, for argument's sake, take an architecture where the YDP runtime and the MySQL instance are separate Docker containers (both managed by YDP). When someone writes and executes the following Python code in a YDP project:

import ydp
ydp.db.insert("users", {"u": "jack", "p": "jack"})

What I believe will occur is:

  1. `db.insert` is called from within the YDP SDK, which is provided at runtime.
  2. The SDK checks the provided runtime information to see if there's a "database" attached to this project.
  3. If a database is found, it uses the attached credentials to connect to the database instance (MySQL), if not already connected, and caches the connection for future use.
  4. Since this is the `insert` shorthand, it crafts a SQL statement and performs the insert.
  5. The response from the MySQL server is returned to the user.
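Step 4, crafting the SQL, might look something like this (a sketch; `build_insert` is a hypothetical internal helper, using parameterized queries so user-supplied values can't inject SQL):

```python
def build_insert(table: str, row: dict) -> tuple:
    """Turn the insert shorthand's (table, dict) arguments into a
    parameterized MySQL INSERT statement plus its bound values."""
    columns = ", ".join(f"`{c}`" for c in row)
    placeholders = ", ".join(["%s"] * len(row))
    sql = f"INSERT INTO `{table}` ({columns}) VALUES ({placeholders})"
    return sql, tuple(row.values())
```

The SDK would then hand the statement and values to the cached connection's cursor, e.g. `cursor.execute(sql, params)`.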

Below, I will outline potential use cases and possible code examples for other common community Docker images.


MongoDB

One-click attach a MongoDB instance to your project, and start inserting documents.

ydp.db.database("business") # could be default db
ydp.db.insert("reviews", {"name": "Your Data Place"})

More comfortable writing your own queries?

client = ydp.db.client  # the underlying Mongo-style client
db = client["business"]
db["reviews"].insert_one({"name": "Your Data Place"})


Nginx

For this, I have to get a bit creative, but Nginx could be a one-click load balancer, a reverse proxy to a common SDK API, or some form of basic web server. Don't take this too seriously, but here is an example code idea:

ydp.nginx.submit("/about-us", "about_us.html")

Or... making sure a function or piece of code is only ever invoked from an NGINX reverse proxy. This could be checked by having some sort of token which the proxy provides, as well as inspecting the forwarding headers. If it's a direct call, we can log this to the simple numeric logger.

from_revprox = ydp.nginx.invoked_from_revprox()
if not from_revprox:
    ydp.log.add("Request called directly", 1)
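A sketch of how `invoked_from_revprox` could work internally (the `X-YDP-Proxy-Token` header name is my invention; the real check would use whatever secret header the proxy is configured to inject):

```python
import hmac

def invoked_from_revprox(headers: dict, proxy_token: str) -> bool:
    """True only if the request carries the shared secret the NGINX
    reverse proxy injects, plus the usual forwarding header."""
    supplied = headers.get("X-YDP-Proxy-Token", "")
    # compare_digest avoids leaking the token via timing differences
    return (
        hmac.compare_digest(supplied, proxy_token)
        and "X-Forwarded-For" in headers
    )
```

A direct call to the container would lack both the token and the `X-Forwarded-For` header, so it fails the check and gets logged.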


Redis

Create an attachable Redis-powered key-value cache for data processing, and so forth. This could be a super easy-to-use tool that lets data engineers pipe their shared data into Redis and pull it from other projects. This could run on a schedule (e.g. the scheduler runs it every 5 seconds).

ydp.kv["taken_emails"] = ydp.db.get("SELECT email FROM usernames")

Then, some register function:

body = ydp.req.body
email = body.get("email")
if not email:
    ydp.res.json({"success": False})
elif email in ydp.kv["taken_emails"]:
    ydp.res.json({"success": False, "message": "Email already taken"})

(I've not used Redis yet, so I'm unsure if this is a common implementation).
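The mapping-style `ydp.kv` interface above could be sketched with a small wrapper class; here I use a plain in-memory dict as a stand-in for Redis (the real store would likely keep each key as a Redis set, via `SADD`/`SISMEMBER`, for cheap membership checks):

```python
class KV:
    """In-memory stand-in for a Redis-backed ydp.kv mapping."""

    def __init__(self):
        self._data = {}

    def __setitem__(self, key, values):
        # Store as a set so `email in kv["taken_emails"]` is O(1)
        self._data[key] = set(values)

    def __getitem__(self, key):
        return self._data.get(key, set())
```

Swapping the dict for a real Redis client would keep the same `kv["taken_emails"]` surface while letting other projects share the data.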


ElasticSearch

Create ElasticSearch servers easily to marry up with databases. I've not used ES yet, but I understand its use, so I'll leave this one for the future.

Amazon Linux

Quickly set up an Amazon gateway to AWS services, to manage EC2, AI, and other AWS functions. An example I've used in the past is:

sentiment = analyze("YDP is awesome!").sentiment  # `analyze` is a placeholder name
print(f"The positive sentiment is {sentiment.positive}")

But I feel like this would work better as an AWS implementation using simple keys.
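AWS Comprehend's real API already has this shape: boto3's `detect_sentiment` returns a `SentimentScore` with `Positive`, `Negative`, `Neutral`, and `Mixed` fields. A thin wrapper the YDP SDK could hide behind a simple key might look like this (the `positive_sentiment` helper is hypothetical; the client would come from `boto3.client("comprehend")`):

```python
def positive_sentiment(client, text: str) -> float:
    """Return the positive-sentiment score for `text` using an
    AWS Comprehend client (e.g. boto3.client("comprehend"))."""
    resp = client.detect_sentiment(Text=text, LanguageCode="en")
    return resp["SentimentScore"]["Positive"]
```

With the user's AWS keys attached to the project, `ydp` could construct the client itself, so the user only ever writes the one-liner.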

Again, I want to reiterate my goal here: it's not to get the people who currently use AWS Comprehend sentiment analysis to use my application. No, it's to make it so easy that you can experiment with Comprehend sentiment on your own data with minimal setup and bootstrapping.

Adminer (PHP)

Need a quick way to inspect your MySQL/Mongo database, create tables, and look at data in a user interface? Just one-click install this community container and start connecting to your instances.

The list could go on forever...

Apache Cassandra, CouchDB, GraphDB, NATS, Mongo + Mongo Express, Logstash, Matomo, Drupal, Kapacitor, phpMyAdmin, Node, who knows what gems exist out there. This is just a theory post, and I'll hopefully have something awesome to showcase soon.

If you'd like to get your hands on this data platform as soon as it's open to the public, make sure to follow this blog for regular updates and voice what you need in a data platform.

