Mimic
A verified fake of the Rackspace API
Introductions
Lekha Jeevan
Glyph Lefkowitz
Ying Li
Thomas Walton
Self Introductions (less than a minute?)
What?
Mimic is a verified fake for the Rackspace APIs. The essence of
Mimic is to pretend. The first step to using the Rackspace APIs
is authentication, and Mimic lets you authenticate.
Pretending
to authenticate
However, Mimic does not validate credentials - all authentications will
succeed. As with the real Identity endpoint, Mimic's identity
endpoint has a service catalog which includes endpoints for all the
services implemented within Mimic.
On authentication, a client uses the service catalog to
look up the URLs for its service endpoints. Such a client needs
only two pieces of configuration to begin communicating with the
cloud: credentials and the identity endpoint. A client
written this way need only change its identity endpoint to
be that of Mimic.
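To make those two pieces of configuration concrete, here is a sketch of what such a client does, assuming the standard Keystone v2 request and response shapes (the token, endpoint ID, and URL values here are made up for illustration):

```python
import json

# Keystone v2-style auth request body; Mimic accepts any credentials,
# since it only pretends to authenticate.
auth_request = json.dumps({
    "auth": {
        "passwordCredentials": {"username": "anyuser",
                                "password": "anypassword"},
        "tenantName": "11111",
    }
})

# A trimmed, made-up auth response in the Keystone v2 shape; the real
# one would come back from POSTing auth_request to Mimic's identity
# endpoint.
auth_response = {
    "access": {
        "token": {"id": "fake-token", "tenant": {"id": "11111"}},
        "serviceCatalog": [
            {"name": "cloudServersOpenStack", "type": "compute",
             "endpoints": [
                 {"region": "ORD",
                  "publicURL": "http://localhost:8900/mimicking/"
                               "NovaApi-1234/ORD/v2/11111"}]}
        ]
    }
}

def endpoint_for(catalog, service_type, region):
    """Look up the public URL for a service type in a region."""
    for service in catalog:
        if service["type"] == service_type:
            for endpoint in service["endpoints"]:
                if endpoint["region"] == region:
                    return endpoint["publicURL"]

catalog = auth_response["access"]["serviceCatalog"]
compute_url = endpoint_for(catalog, "compute", "ORD")
```

Everything after authentication flows from the catalog lookup, which is why pointing the identity endpoint at Mimic is the only change the client needs.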
Pretending
to Boot Servers
Lekha: When you ask Mimic to create a server, it pretends to create
one. This is not like stubbing with static responses: when Mimic
pretends to build a server, it remembers the information about that
server and will tell you about it in subsequent requests.
Pretending
is faster
Lekha: Mimic was originally created to speed things up. So it was
very important that it be fast, both in responding to requests and
in getting developers set up.
in-memory
Lekha: It uses in-memory data structures.
minimal dependencies
(almost entirely pure Python)
Lekha: with minimal software dependencies, almost entirely pure Python.
Service Dependencies
Lekha: With no service dependencies
Configuration
Lekha: and no configuration
self-contained
Lekha: And is entirely self-contained.
Frontend
Cloud Intelligence via
Protractor
Reach via
capybara
(Selenium web driver)
Lekha: Some of the frontend applications that have started using Mimic are Cloud Intelligence, a UI for Cloud Monitoring, and Reach, the control panel for most of our products. Both of these applications use the Selenium web driver, one with AngularJS and the other with Ruby. These applications integrated with Mimic simply by changing their application settings file to point at Mimic's Identity instead of the real Identity service. (CLICK)
Some of the backend applications using Mimic are Autoscale, which uses CloudCafe and Twisted's Trial test framework for its testing, and Cloud Monitoring, which uses the Node.js test libraries Whiskey and Tape. These applications, too, integrated with Mimic just by changing their settings file to point at Mimic's Identity instead of the real Identity service.
APIs supported today
Identity
Compute
Load balancers
Cloud Monitoring
Cloud Queues
Swift
RCV3
Lekha: Mimic was first created for the purposes of Autoscale testing and included a
subset of the API calls required by Autoscale, which were Identity, (CLICK)
Compute, (CLICK) and
Load balancers. (CLICK)
Since then, we have been implementing plugins as needed,
which includes Cloud Monitoring, Cloud Queues, Swift and RCV3.
Again, this does not include all the API calls supported by these products,
but subsets.
Demo
Install and run Mimic
Lekha: Now, let's start with a demo of how to install and use Mimic.
Start Your Virtualenvs!
Lekha: Every command after this slide assumes that you have the "Mimic"
virtualenv activated. If we say to run "pip" or "python", it's
assumed to be the "pip" or "python" in *that* virtualenv, not the
one from your system.
Now You Try!
Lekha: It is only a 3-step process! In a virtualenv, we
pip install mimic, run mimic, and hit the endpoint!
Mimic returns the authentication endpoint, which lets us authenticate
and get a service catalog containing the (OpenStack) services that Mimic
implements.
(Click) and wait 5 minutes for audience to get Mimic installed
and try it out.
Demo
Nova command-line client
Lekha: Let's see how we can run the Python nova command-line client against Mimic.
config.sh
export OS_USERNAME=username
export OS_PASSWORD=password
export OS_TENANT_NAME=11111
export OS_AUTH_URL=http://localhost:8900/identity/v2.0/tokens
Lekha: Here is the config file that holds
the environment variables required for the OpenStack
command-line clients.
config.sh
export OS_USERNAME=username
export OS_PASSWORD=password
export OS_TENANT_NAME=11111
export OS_AUTH_URL=http://localhost:8900/identity/v2.0/tokens
Lekha: We have set a random username, password,
and tenant name, as Mimic only pretends to authenticate.
config.sh
export OS_USERNAME=username
export OS_PASSWORD=password
export OS_TENANT_NAME=11111
export OS_AUTH_URL=http://localhost:8900/identity/v2.0/tokens
Lekha: And the Auth url is set to be that of Mimic.
Now, let's continue where we left off with our first demo. So we
already have an instance of mimic running.
Now You Try!
Lekha: Let's pip install the Python nova client and ensure the
config file has the AUTH_URL pointing to that of Mimic. We source
the config file, and we see that no servers exist on Mimic startup! Let's
create a server with a random flavor and image. The server created
is in an active state. Let's create a second server, which
is also built immediately and is in an active state. Now we have 2
active servers that Mimic knows of. Let's delete the second
server... and now Mimic knows of the deleted server and has only
the one server remaining.
(Click) and wait 5 minutes for audience to try.
✈
Lekha: You will see how fast testing server creation is. Now imagine your dev VMs
configured to run tests against Mimic.
One of our devs from the Rackspace Cloud Intelligence
team calls this "Developing on Airplane Mode!", as Mimic enables us to work
offline without having to worry about the uptimes of the upstream
systems, and to get immediate feedback on the code being written.
BUILD → ACTIVE ERROR ACTIVE
Lekha: However, there is one other issue we run into: test failures due to random upstream failures.
For example, a test would expect a building server to go into an 'active' state,
but it would (CLICK) go into an ERROR state.
unknown errors
Lekha: And such negative scenarios, like how
your application would behave if the server did go into an 'error' state,
cannot be tested deliberately, because they could not be
reproduced consistently.
Mimic simulates errors
Lekha: Mimic helps reproduce such scenarios, so your application can be
programmed to react to such unexpected, intermittent failures.
Error injection using metadata
Glyph: So, we had the one active server. Now, let's create a
server with the `metadata`: `"server_building": 30`. This will
keep the server in the build state for 30 seconds. Now we have 2
servers: the active one and the building one. We can also create a server
that goes into an error state, using the `metadata`: `"server_error":
True`. As you can see, we now have 3 different servers, in 3
different states.
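The three create requests differ only in their metadata. As a sketch of the Nova-style request bodies involved (the name, flavor, and image values are placeholders, not values required by Mimic):

```python
import json

def create_server_body(name, metadata=None):
    """Build a Nova-style server-create request body."""
    server = {"name": name,
              "flavorRef": "2",         # placeholder flavor
              "imageRef": "any-image"}  # placeholder image
    if metadata is not None:
        server["metadata"] = metadata
    return json.dumps({"server": server})

# An ordinary server: goes ACTIVE immediately.
normal = create_server_body("server-1")

# Stays in the BUILD state for 30 (virtual) seconds.
slow_build = create_server_body("server-2",
                                {"server_building": "30"})

# Goes straight into an ERROR state.
erroring = create_server_body("server-3",
                              {"server_error": "true"})
```

Because Nova metadata values are strings on the wire, the sketch passes the values as strings.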
/mimic/v1.1/tick
Glyph: Instead of simply waiting 30 seconds, you can hit this
second out-of-band endpoint, the "tick" endpoint ...
{
"amount": 30.0
}
Glyph: with a payload like this. It will tell you that time has
passed, like so:
{
"advanced": 30.0,
"now": "1970-01-01T00:00:30.000000Z"
}
Glyph: Now, you may notice there's something a little funny about
that timestamp: it's suspiciously close to midnight, January
1st, 1970. Mimic begins each restart thinking it's
1970, at the Unix epoch; if you want to advance the clock, just
plug in the number of seconds since the epoch as the "amount" and
your Mimic will appear to catch up to real time.
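Since a fresh Mimic thinks it is the Unix epoch, "catching up to real time" just means advancing by the current epoch time. A sketch of building the tick payloads:

```python
import json
import time

def tick_payload(seconds):
    """Body for POSTing to Mimic's /mimic/v1.1/tick endpoint."""
    return json.dumps({"amount": float(seconds)})

# Advance Mimic's clock by 30 virtual seconds:
thirty = tick_payload(30)

# Make a freshly started Mimic appear to catch up to the present,
# by advancing it by the number of seconds since the epoch:
catch_up = tick_payload(time.time())
```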
{
"server": {
"status": "BUILD",
"updated": "1970-01-01T00:00:00.000000Z",
"OS-EXT-STS:task_state": null,
"user_id": "170454",
"addresses": {},
"...": "..."
}
}
Glyph: If you've previously created a server with "server_building"
metadata that tells it to build for some number of seconds, and you
hit the 'tick' endpoint telling it to advance time the
server_building number of seconds...
{
"server": {
"status": "ACTIVE",
"updated": "1970-01-01T00:00:01.000000Z",
"OS-EXT-STS:task_state": null,
"user_id": "170454",
"addresses": {},
"...": "..."
}
}
Glyph: that server (and any others) will now show up as "active",
as it should. This means you can set up very long timeouts, and
have servers behave "realistically", but in a way where you can
test several hours of timeouts at a time.
--realtime
Glyph: You can ask Mimic to actually pay attention to the real
clock with the --realtime
command-line option; that
disables this time-advancing endpoint, but it will allow any test
suites that rely on real time passing to keep running.
https://github.com/rackerlabs/mimic
Lekha: Now that we know what Mimic can do, let's hack on it to make
it do more.
Running Mimic's tests
git clone https://github.com/[you]/mimic
cd mimic
pip install tox
tox -e lint -e py27
Lekha: Now, let's clone the fork you made and go into the directory. In
your virtualenv, install tox and use it to run Mimic's test
suite. (Wait 5 minutes.)
Lekha: Mimic is in the business of responding to HTTP requests, and in
order to do that it uses a web framework. The web framework in
question is Klein, a micro-framework for developing
production-ready web services with Python. Klein is a thin wrapper
around twisted.web, so you'll use the Request
interface from twisted.web quite extensively.
Klein Demo
# my-server.py
from klein import run, route

@route('/')
def home(request):
    return b'Hello, world!'

run("localhost", 8080)
... and then ...
python my-server.py
curl http://localhost:8080/
Lekha: Luckily, Klein is super easy to use! This is a full, working
example of how to use Klein to return some static bytes.
Klein Demo
# my-server.py
import json
from klein import run, route

@route('/')
def home(request):
    request.setResponseCode(200)
    body = {"hello": "world"}
    return json.dumps(body)

run("localhost", 8080)
... and then ...
python my-server.py
curl http://localhost:8080/
Lekha: For a slightly more realistic example, since you're going to be
dealing with a lot of JSON if you're using Mimic, this is how you
would serialize a response with some JSON in it and set a response
code. You can give this a try pretty much as it is shown on this
slide.
Plugins!
Lekha: When we come back from a 5 minute break, we'll get into the main
event, which is how to write a Mimic plugin that mocks a new
service of your choice.
# mimic/plugins/my_plugin.py
from mimic.rest.my_api import MyAPIMock
the_mock_plugin = MyAPIMock()
Lekha: To register your plugin with Mimic, you just need to drop an
instance of it into any module of the mimic.plugins package.
In your fork of Mimic, you can put a file like this one
into the mimic/plugins directory. We haven't
implemented MyAPIMock yet, but we will in a moment.
Everybody please create a file like this now. (Wait 1 minute.)
Raise your hand when you've got it.
# mimic/test/test_core.py
# CoreBuildingTests.test_from_plugin_includes_all_plugins
plugin_apis = set((nova_plugin.nova,
                   loadbalancer_plugin.loadbalancer,
                   swift_plugin.swift,
                   queue_plugin.queue,
                   maas_plugin.maas,
                   rackconnect_v3_plugin.rackconnect,
                   my_plugin.the_mock_plugin))
Lekha: Mimic prides itself on 100% test coverage and is big on TDD, so let's start with a test.
Modify the test_from_plugin_includes_all_plugins
test to include your mock plugin, like this.
Run The Tests
tox -e py27
Lekha: Now let's run the tests.
(Watch Them Fail)
Traceback (most recent call last):
File ".../site-packages/twisted/plugin.py", line 167, in getCache
provider = pluginModule.load()
File ".../site-packages/twisted/python/modules.py", line 383, in load
return self.pathEntry.pythonPath.moduleLoader(self.name)
File ".../site-packages/twisted/python/reflect.py", line 303, in namedAny
topLevelPackage = _importAndCheckStack(trialname)
File ".../site-packages/twisted/python/reflect.py", line 250, in _importAndCheckStack
reraise(excValue, excTraceback)
File ".../Mimic/mimic/plugins/my_plugin.py", line 3, in <module>
from mimic.rest.my_api import MyAPIMock
exceptions.ImportError: No module named my_api
Make Them Pass
Lekha: In order to make the tests pass, you'll need to understand a little
bit about what Mimic expects from its plugins, so we'll explain how
they relate to Mimic. Afterwards, you should be able to create a
functioning plugin that passes some tests and also shows up in the
service catalog when you authenticate to Mimic.
Identity
Is the Entry Point
(Not A Plugin)
Glyph: Mimic's Identity endpoint is the top-level entry point to
Mimic as a service. Every other URL to a mock is available from
within the service catalog. As we were designing the plugin API,
it was clear that this top-level Identity endpoint needed to be the
core part of Mimic, and plug-ins would each add an entry for
themselves to the service catalog.
http://localhost:8900/mimicking/NovaApi-78bc54/ORD/v2/tenant_id_f15c1028/servers
Glyph: URLs within Mimic's service catalog all look similar. In
order to prevent conflicts between plugins, Mimic's core
encodes the name of your plugin and the region name specified by
your plugin's endpoint. Here we can see what a URL for the Compute
mock looks like. (CLICK) This portion of the URL, which identifies
which mock is being referenced, is handled by Mimic itself, so that
it's always addressing the right plugin. (CLICK) Then there's the
part of the URL that your plugin itself handles, which identifies
the tenant and the endpoint within your API.
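The split can be illustrated by taking the slide's URL apart (the plugin suffix and tenant ID are just the made-up values from the slide):

```python
url = ("http://localhost:8900/mimicking/"
       "NovaApi-78bc54/ORD/v2/tenant_id_f15c1028/servers")

# Strip the host, then split the path into segments.
path = url.split("http://localhost:8900/", 1)[1]
parts = path.split("/")

# Mimic core handles the /mimicking/ prefix, the plugin name,
# and the region; everything after that belongs to the plugin.
core_part = "/".join(parts[:3])    # handled by Mimic core
plugin_part = "/".join(parts[3:])  # handled by your plugin
```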
Plugin Interface: “API Mock”
Glyph: Each plugin is an API mock, which has only two methods:
class MyAPIMock():
    def catalog_entries(...)
    def resource_for_region(...)
(that's it!)
Glyph: (click) catalog_entries
(click) and resource_for_region
(click) That's it!
def catalog_entries(self, tenant_id):
Glyph: catalog_entries
takes a tenant ID and returns the
entries in Mimic's service catalog for that particular API mock.
APIs have catalog entries for each API type, which in turn have
endpoints for each virtual region they represent.
return [
    Entry(
        tenant_id, "compute", "cloudServersOpenStack",
        [
            Endpoint(tenant_id, region="ORD",
                     endpoint_id=text_type(uuid4()),
                     prefix="v2"),
            Endpoint(tenant_id, region="DFW",
                     endpoint_id=text_type(uuid4()),
                     prefix="v2")
        ]
    )
]
Glyph: This takes the form of an iterable of a class called
(CLICK) Entry, each of which is (CLICK) a tenant ID,
(CLICK) a type, (CLICK) a name, (CLICK) and a collection of
(CLICK) Endpoint objects, each (CLICK) containing (CLICK)
the name of a pretend region, (CLICK) and a URI version prefix that
should appear in the service catalog after the generated service
URL but before the tenant ID.
def resource_for_region(self, region, uri_prefix,
                        session_store):
    return (MyRegion(...)
            .app.resource())
Glyph: resource_for_region takes (CLICK) the name of a
region, (CLICK) a URI prefix - produced by Mimic core to make the URI
for each service unique, so you can generate URLs to your services
in any responses which need them - (CLICK) and a session store
where the API mock may look up the state of the resources it has
pretended to provision for the respective
tenants. (CLICK) resource_for_region returns an HTTP resource
associated with the top level of the given region. This resource
then routes requests to any tenant-specific resources
associated with the full URL path.
class MyRegion():
    app = MimicApp()

    @app.route('/v2/<string:tenant_id>/servers',
               methods=['GET'])
    def list_servers(self, request, tenant_id):
        return json.dumps({"servers": []})
Glyph: Once you've created a resource for your region, it has routes
for the part of the URI path that follows the core-handled
prefix. Here you can see what the Nova "list servers" endpoint would
look like using Mimic's API; as you can see, it's not a lot of work
at all to return a canned response. It would be a little beyond
the scope of this brief talk to do a full tutorial on how resource
traversal works in the web framework that Mimic uses, but hopefully
this slide - which is a fully working response - shows that it
is pretty easy to get started.
Tell Mimic
To Load It
Glyph: Now that we have most of a plugin written, we can run those
tests again.
# mimic/plugins/my_plugin.py
from mimic.rest.my_api import MyAPIMock
the_mock_plugin = MyAPIMock()
Glyph: We already told Mimic to load the new plugin with the file you
dropped in earlier.
mimic/rest/nova_api.py
mimic/rest/maas_api.py
mimic/rest/swift_api.py
mimic/rest/rackconnect_v3_api.py
Glyph: Rather than provide a fake example, the real examples that Mimic
contains should be a pretty good starting point for you. You can
see IAPIMock implementations in all of these 4 files, and several
more in the mimic.rest package.
We will be circling the room to answer questions and help out.
(Allow EXACTLY 23 minutes for implementation.)
tox -e py27
Glyph: When you're done implementing, that plugin-loading test should pass.
Tenant Session
(remembered until restart)
Glyph: This, of course, just shows you how to create ephemeral,
static responses - but as Lekha said previously, Mimic doesn't just
create fake responses; it remembers - (CLICK) in memory - what
you've asked it to do.
session = session_store.session_for_tenant_id(tenant_id)

class MyMockData():
    "..."

my_data = session.data_for_api(my_api_mock,
                               MyMockData)
Glyph: That "session_store" object passed to resource_for_region is
the place you can keep any relevant state. It gives you a
per-tenant session object, and then you can ask that session for
any mock-specific data you want to store for that tenant. All
session data is created on demand, so you pass in a callable which
will create your data if no data exists for that tenant/API pair.
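The on-demand pattern can be sketched in plain Python; this dict-backed store is an illustration of the idea, not Mimic's actual implementation:

```python
class Session(object):
    """Per-tenant session; per-API data is created on first access."""
    def __init__(self):
        self._api_data = {}

    def data_for_api(self, api_mock, factory):
        # Create the data lazily, once per (tenant, API) pair.
        if api_mock not in self._api_data:
            self._api_data[api_mock] = factory()
        return self._api_data[api_mock]


class SessionStore(object):
    """Toy stand-in for Mimic's session store: one session per tenant."""
    def __init__(self):
        self._sessions = {}

    def session_for_tenant_id(self, tenant_id):
        if tenant_id not in self._sessions:
            self._sessions[tenant_id] = Session()
        return self._sessions[tenant_id]


store = SessionStore()
session = store.session_for_tenant_id("11111")
my_data = session.data_for_api("my_api_mock", dict)
my_data["servers"] = ["server-1"]
```

Because the same session object comes back on every request for that tenant, state recorded by one request is visible to all later ones, which is what lets Mimic "remember" what it pretended to provision.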
session = session_store.session_for_tenant_id(tenant_id)
from mimic.plugins.other_mock import (other_api_mock,
OtherMockData)
other_data = session.data_for_api(other_api_mock,
OtherMockData)
Glyph: Note that you can pass other API mocks as well, so if you
want to inspect a tenant's session state for other services and
factor that into your responses, it's easy to do so. This pattern
of inspecting and manipulating a different mock's data can also be
used to create control planes for your plugins, so that one plugin
can tell another plugin how and when to fail by storing information
about the future expected failure on its session.
Errors As A Service
Glyph: We are still working on the first error-injection endpoint
that works this way, by having a second plugin tell the first what
its failures are, but this is an aspect of Mimic's development we
are really excited about, because that control plane API also
doubles as a memory of the unexpected, and even potentially
undocumented, ways in which the mocked service can fail.
Error Injection
Glyph: We've begun work on a branch doing this for Compute, but we
feel that every service should have the ability to inject arbitrary
errors.
Error Injection
Metadata-Based
Glyph: As Lekha explained, Mimic can inject some errors into the
Nova mock by supplying metadata within a request itself.
Error Injection
Metadata-Based: In-Band
Glyph: However, this means that in order to cause an error to
happen, you need to modify the request that you're making to mimic,
which means your application isn't entirely unmodified.
Error Injection
Control-Plane-Based
Glyph: What we've started to do, and would like to do more of in
the future, is to put an error-injection control plane into the
service catalog for each mock, with a special entry type so that
your testing infrastructure can talk to it. We have one for the
Nova mock, and we'd love it for you to contribute more for the
other mocks.
Error Injection
Control-Plane-Based: Out-Of-Band
Glyph: Using the behavior control plane, your testing tool can
authenticate to mimic, and tell Mimic to cause certain upcoming
requests to succeed or fail before the system that you're testing
even communicates with it. Your system would not need to relay any
expected-failure data itself, and so no metadata would need to be
passed through.
Error Injection
Future: With Your Help
Glyph: What we'd really like to build with these out-of-band
failures, though, is not just a single feature, but an API that
allows people developing applications against Rackspace APIs, both
internally and externally, to make those applications as robust as
possible by easily determining how they will react at scale, under
load, and under stress, even if they've never experienced those
conditions. So we need you to contribute the errors and behaviors
that you have experienced.