
canonical-ci-engineering team mailing list archive

[Michael Nelson] Re: [Ubunet-discuss] Stork Service

 

Hi guys, I found the mail below quite enlightening about what Swift
provides, with people sharing their views on how it applies to a
specific context.

By squinting hard enough I could recognize some patterns/features we
care about.

And even if not everything applies to our case, it's good to know who
knows more about Swift than we do ;)

Happy reading !

      Vincent

--- Begin Message ---
Woops, replied directly to Lukasz.


On Tue, Dec 17, 2013 at 9:54 AM, Michael Nelson
<michael.nelson@xxxxxxxxxxxxx> wrote:
> On Fri, Dec 13, 2013 at 4:50 PM, Łukasz Czyżykowski
> <lukasz.czyzykowski@xxxxxxxxxxxxx> wrote:
>> Hi All,
>>
>> After the latest discussion with Stephen about how Stork should work,
>> here's the proposal.
>
> Hey guys. I think the motivation behind Stork (i.e. make it easy for
> different projects within Canonical to serve and share static media in
> a standard way) is excellent, but I'm wondering if we can build on
> existing infrastructure, particularly OpenStack Swift.
>
> As a result of some of the work I've been doing lately, I've been
> thinking about how we can use OpenStack Swift for serving uploaded and
> static media. With that in the back of my mind, it seems to me that
> Swift will already do a lot of what you seem to want to implement with
> Stork (particularly, storing and serving static media, and a CLI
> interface).
>
> The only extra functionality that I understand Stork to have that
> Swift doesn't is the convoy combo-loading service - easy to
> provide either way.
>
> Just to frame this discussion, the Swift service is basically haproxy
> in front of swift-storage backends. When you upload content to a
> container, it's stored at least 3 times on different backends and
> served via the proxy. Publicly readable containers can be used to
> serve public static content. Although Rackspace's Swift is CDN
> enabled, the default Swift (and Canonistack's) is not.
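The basics above can be sketched with the standard swift CLI. This is a minimal sketch; the auth URL, credentials, and container name are hypothetical, and the ACL syntax assumes a stock Swift deployment:

```shell
# Authenticate via the usual OpenStack environment variables
# (values here are placeholders, not real credentials):
export OS_AUTH_URL=https://keystone.example.com:5000/v2.0
export OS_USERNAME=myuser
export OS_PASSWORD=secret
export OS_TENANT_NAME=myproject

# Upload a file into a container (the container is created if missing):
swift upload my-assets build/app.min.js

# Make the container world-readable so its objects can be served as
# public static content straight through the proxy:
swift post -r '.r:*' my-assets
```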
>
>
> So, with that in mind...
>
>>
>> Stork as a service would serve static, pre-built css and js assets.
>
> Swift as a service would serve static files (images, pre-built css and
> js assets).
>
>
>> Building
>> and testing of those assets would be handled by the specific projects,
>> outside of Stork, and only the resulting builds would be sent to it.
>
> outside of swift, and only the resulting builds (of static content)
> would be uploaded to swift.
>
>>
>> The process could look like that:
>>
>> In your asset project (all of that could be put into a makefile)
>>
>> run tests, make sure they pass
>> build assets
>> push build directory together with the version number to Stork (e.g.: $
>> stork upload ubuntuone-assets r567)
>
> When the project is built by webops, the static media is synced to a
> new container for that project, using the revision number 'r567' as
> the container name.
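The sync step above could look roughly like this; the revision number and build directory are hypothetical, and the project-to-tenant mapping is assumed from the discussion:

```shell
# Hypothetical build/deploy step: upload the built static media into a
# new container named after the revision (the project maps to the tenant):
REV=r567
swift upload "$REV" build/

# Sanity-check the upload by listing the new container's objects:
swift list "$REV"
```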
>
>
>>
>> Stork
>>
>> puts received files in a directory with the supplied version (e.g.:
>> /assets/ubuntuone-assets/r567)
>> registers it in the central db? (could be that just the file system dir
>> would be enough)
>
> Puts the received files into a new container named with the supplied
> version 'r567'. This registers the revision with swift, so that
> listing the containers for the given project (tenant) will now include
> the r567 container.
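Listing revisions then falls out of the standard CLI for free, along the lines of (container names assumed from the example above):

```shell
# List all containers for the authenticated tenant -- with one container
# per revision, this is effectively the list of published versions:
swift list

# List the objects inside one revision's container:
swift list r567
```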
>
>
>> is ready to serve them with convoy
>
> So that's the one thing that swift obviously won't do. I'd be
> interested to know how hard it would be to setup a convoy instance
> that combines static css/js from a swift container. I don't think it
> would be hard, but could be the more interesting part of this.
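For reference, combo-loading just means serving several requested files concatenated in one response. A minimal local illustration of the semantics (the combo URL format and paths are assumptions, in the style of YUI-like combo handlers, not confirmed convoy behaviour):

```shell
# Create two small stand-in CSS files:
mkdir -p build/css
printf 'body{margin:0}\n' > build/css/base.css
printf 'h1{color:red}\n'  > build/css/app.css

# A combo request such as
#   GET /combo?r567/css/base.css&r567/css/app.css
# would return the same bytes as concatenating the files in request order:
cat build/css/base.css build/css/app.css
```

The open question is only where convoy reads those files from: a local disk sync of the container, or direct GETs against the Swift proxy.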
>
>>
>>
>> Stork utility
>> To enable other projects to use the Stork service, a small CLI utility
>> would be needed. It would be responsible for:
>>
>> Upload new project build
>
> The swift utility can do this work for us. Possibly using Swift's bulk
> operations (extract archive), although I'm not sure if that's
> available on Canonistack's Swift.
>
> http://docs.openstack.org/developer/swift/misc.html#module-swift.common.middleware.bulk
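If the bulk middleware linked above is enabled, the whole build could go up in one request. A sketch, assuming a token in `$TOKEN` and a hypothetical endpoint and account name:

```shell
# Pack the build output into a tarball:
tar czf assets.tar.gz -C build .

# PUT it with ?extract-archive= so the middleware unpacks it server-side,
# creating one object per file under the r567 container:
curl -i -X PUT \
     -H "X-Auth-Token: $TOKEN" \
     --data-binary @assets.tar.gz \
     "https://swift.example.com/v1/AUTH_ubuntuone/r567?extract-archive=tar.gz"
```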
>
>
>> List existing versions for a given project
>
> The swift utility will list the existing containers for a given
> project (tenant).
>
>
>> Delete old versions (?)
>
> The swift utility can delete old containers.
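Cleanup of an old revision is then a one-liner (container name hypothetical, following the per-revision naming above):

```shell
# Remove a single stale object from a revision's container:
swift delete r500 css/old.css

# Remove an old revision's container together with all of its objects:
swift delete r500
```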
>
>>
>>
>> Problems solved by that approach
>>
>> The asset building process can use whatever method or tool necessary.
>> The building process can depend on bits which don't have to be installed
>> on the Stork server (no need for Ruby or NodeJS)
>
> +1 - sounds good.
>
>> Each project can manage its own versions (add, remove, when it's needed),
>> without the need for Stork re-deployment.
>
> I think this would be webops managed, but yes, without the need for a
> Stork redeployment.
>
>> Adding new asset project is straightforward and doesn't require
>> re-deployments.
>
> Yep.
>
>>
>>
>> Questions
>>
>> Is it feasible to give developers access to upload static assets to
>> the production server?
>
> I don't think that would be a good idea, but adding the static assets
> to swift as part of the build would make sense.
>
>> Authentication (and encryption) for the API access between cli tool and
>> server?
>
> So, if we use swift as the storage backend here, as well as the
> service serving the public assets, I don't think we'd need a server
> (or a separate cli?). All we'd need is convoy serving combined assets
> from swift.
>
>> Should deletion of old versions be reserved for administrators?
>
> Yep, I think deleting old containers should be a webops task.
>
>>
>>
>> Any input welcomed :)
>
> Hope that's useful at least for some discussion before moving ahead -
> whatever the direction ends up being.
>
> Cheers,
> Michael

_______________________________________________
Mailing list: https://launchpad.net/~ubunet-discuss
Post to     : ubunet-discuss@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~ubunet-discuss
More help   : https://help.launchpad.net/ListHelp

--- End Message ---