--- Begin Message ---
On Wed, Dec 18, 2013 at 11:29 AM, Łukasz Czyżykowski
<lukasz.czyzykowski@xxxxxxxxxxxxx> wrote:
> To make it work with convoy you still have to have it as a service facing
> the public
That shouldn't be a problem for public containers. I've got an RT
about that for the container that I want to set up for sca uploaded
media - an Apache/Squid frontend to swift-proxy:
https://portal.admin.canonical.com/66200
>. In that case, what would be the benefit of using swift over any
> other storage backend?
Isn't the question really: what would be the benefit of using swift
over creating our own stork storage service (based on apache + (?) +
convoy)? That would be obvious, right? Or do you mean: what's the
benefit of swift over some other storage and delivery service already
set up in our infrastructure (I don't know of any)? Let me know
whether I've misunderstood your question and I'll try to answer more
specifically :-)
> CLI client?
python-swift provides a swift CLI client for accessing your containers, objects, etc.
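For example, assuming your credentials are exported in the environment
(the exact auth setup will depend on how the canonistack account is
configured), the basics look like:

  $ swift stat                      # show account details and usage
  $ swift list                      # list the tenant's containers
  $ swift list r567                 # list the objects in one container
  $ swift download r567 js/app.js   # fetch a single object

(the container/object names above are just illustrative).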
>
> Are swift urls predictable? I mean, will you be able to compute the static
> url based on the project name and rev?
Yes - the public url will be via an apache proxypass. I'll be landing
an example in sca first thing in the new year (for the uploaded media)
so we can chat about that more then.
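To give a rough idea of what I mean in the meantime (the hostname and
tenant below are placeholders until that lands):

  # Frontend Apache vhost: map a stable public path onto the swift
  # proxy so clients never see the /v1/AUTH_<tenant>/ prefix.
  ProxyPass        /assets/ http://swift-proxy.internal:8080/v1/AUTH_<tenant>/
  ProxyPassReverse /assets/ http://swift-proxy.internal:8080/v1/AUTH_<tenant>/

With that in place the public url for a given revision is predictable:
/assets/<container>/<path>, e.g. /assets/r567/js/app.js.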
-Michael
>
> Cheers
>
> On 17 December 2013 10:00, Michael Nelson <michael.nelson@xxxxxxxxxxxxx>
> wrote:
>>
>> Woops, replied directly to Lukasz.
>>
>>
>> On Tue, Dec 17, 2013 at 9:54 AM, Michael Nelson
>> <michael.nelson@xxxxxxxxxxxxx> wrote:
>> > On Fri, Dec 13, 2013 at 4:50 PM, Łukasz Czyżykowski
>> > <lukasz.czyzykowski@xxxxxxxxxxxxx> wrote:
>> >> Hi All,
>> >>
>> >> After the latest discussion with Stephen about how Stork should work,
>> >> here's a proposal.
>> >
>> > Hey guys. I think the motivation behind stork (i.e. make it easy for
>> > different projects within Canonical to serve and share static media in
>> > a standard way) is excellent, but I'm wondering if we can build around
>> > existing infrastructure, particularly OpenStack swift.
>> >
>> > As a result of some of the work I've been doing lately, I've been
>> > thinking about how we can use openstack swift for serving uploaded and
>> > static media. With that in the back of my head, it seems to me that
>> > Swift will already do a lot of what you seem to want to implement with
>> > stork (particularly, storing and serving static media, and a cli
>> > interface).
>> >
>> > The only extra functionality that I understand stork to have that
>> > swift doesn't have is the convoy combo-loading service - easy to
>> > provide either way.
>> >
>> > Just to frame this discussion, the swift service is basically haproxy
>> > in front of swift-storage backends. When you upload content to a
>> > container, it's stored at least 3 times on different backends and
>> > served via the proxy. Publicly readable containers can be used to
>> > serve public static content. Although rackspace's swift is CDN
>> > enabled, the default swift (and canonistack's) is not.
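>> >
>> > (For reference, making a container world-readable is just a matter of
>> > setting its read ACL, e.g.:
>> >
>> >   $ swift post -r '.r:*' r567
>> >
>> > after which objects in r567 can be fetched unauthenticated via the
>> > proxy - container name illustrative.)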
>> >
>> >
>> > So, with that in mind...
>> >
>> >>
>> >> Stork as a service would serve static, pre-built css and js assets.
>> >
>> > Swift as a service would serve static files (images, pre-built css and
>> > js assets).
>> >
>> >
>> >> Building
>> >> and testing of those assets would be handled by the specific projects,
>> >> outside of Stork, and only the resulting builds would be sent to it.
>> >
>> > outside of swift, and only the resulting builds (of static content)
>> > would be uploaded to swift.
>> >
>> >>
>> >> The process could look like this:
>> >>
>> >> In your asset project (all of that could be put into a makefile)
>> >>
>> >> run tests, make sure they pass
>> >> build assets
>> >> push the build directory together with the version number to Stork (e.g.: $
>> >> stork upload ubuntuone-assets r567)
>> >
>> > When the project is built by webops, the static media is synced to a
>> > new container for that project, using the revision number 'r567' as
>> > the container name.
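>> >
>> > Something like the following (names illustrative - the container could
>> > equally be <project>-r567 if several projects end up sharing a tenant):
>> >
>> >   $ swift upload r567 build/
>> >
>> > which creates the container if it doesn't exist and uploads everything
>> > under build/, using the relative file paths as the object names.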
>> >
>> >
>> >>
>> >> Stork
>> >>
>> >> puts received files in a directory with the supplied version (e.g.:
>> >> /assets/ubuntuone-assets/r567)
>> >> registers it in the central db? (could be that just the file system dir
>> >> would be enough)
>> >
>> > Puts the received files into a new container named with the supplied
>> > version 'r567'. This registers the revision with swift, so that
>> > listing the containers for the given project (tenant) will now include
>> > the r567 container.
>> >
>> >
>> >> is ready to serve them with convoy
>> >
>> > So that's the one thing that swift obviously won't do. I'd be
>> > interested to know how hard it would be to set up a convoy instance
>> > that combines static css/js from a swift container. I don't think it
>> > would be hard, but could be the more interesting part of this.
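>> >
>> > One low-tech option would be to avoid teaching convoy about swift at
>> > all: sync each published container onto the convoy host and let convoy
>> > serve from disk as it does today. A sketch (paths and the combo url
>> > format are illustrative only):
>> >
>> >   $ mkdir -p /srv/convoy/ubuntuone-assets/r567
>> >   $ cd /srv/convoy/ubuntuone-assets/r567 && swift download r567
>> >
>> >   # a combo request would then look something like:
>> >   # /combo?ubuntuone-assets/r567/js/a.js&ubuntuone-assets/r567/js/b.js
>> >
>> > Having convoy fetch objects from swift over http directly would avoid
>> > the sync step, but would need a small change to convoy itself.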
>> >
>> >>
>> >>
>> >> Stork utility
>> >> To enable other projects to use the Stork service, a small CLI utility
>> >> would be needed. It would be responsible for:
>> >>
>> >> Upload new project build
>> >
>> > The swift utility can do this work for us. Possibly using swift's bulk
>> > operations (extract archive), although I'm not sure if that's
>> > available on canonistack swift.
>> >
>> >
>> > http://docs.openstack.org/developer/swift/misc.html#module-swift.common.middleware.bulk
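>> >
>> > If that middleware is enabled, uploading a whole build is a single
>> > request - something like (url and token are placeholders):
>> >
>> >   $ tar czf build.tar.gz -C build .
>> >   $ curl -X PUT -H "X-Auth-Token: $TOKEN" \
>> >       "https://<swift-proxy>/v1/AUTH_<tenant>/r567?extract-archive=tar.gz" \
>> >       -T build.tar.gz
>> >
>> > which expands the archive into objects under the r567 container in one
>> > go. If it isn't enabled, a plain 'swift upload' of the directory does
>> > the same job with more requests.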
>> >
>> >
>> >> List existing versions for a given project
>> >
>> > The swift utility will list the existing containers for a given
>> > project (tenant).
>> >
>> >
>> >> Delete old versions (?)
>> >
>> > The swift utility can delete old containers.
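>> >
>> > For example, retiring an old revision would just be:
>> >
>> >   $ swift delete r550   # removes the container and all its objects
>> >
>> > (container name illustrative).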
>> >
>> >>
>> >>
>> >> Problems solved by that approach
>> >>
>> >> The asset building process can use whatever method or tool is necessary.
>> >> The building process can depend on bits that don't have to be installed
>> >> on the Stork server (no need for Ruby or NodeJS)
>> >
>> > +1 - sounds good.
>> >
>> >> Each project can manage its own versions (add, remove, when needed),
>> >> without the need for a Stork re-deployment.
>> >
>> > I think this would be webops managed, but yes, without the need for a
>> > Stork redeployment.
>> >
>> >> Adding a new asset project is straightforward and doesn't require
>> >> re-deployments.
>> >
>> > Yep.
>> >
>> >>
>> >>
>> >> Questions
>> >>
>> >> Is it feasible to give developers access to upload static assets to
>> >> the production server?
>> >
>> > I don't think that would be a good idea, but adding the static assets
>> > to swift as part of the build would make sense.
>> >
>> >> Authentication (and encryption) for the API access between the CLI tool
>> >> and the server?
>> >
>> > So, if we use swift as the storage backend here, as well as the
>> > service serving the public assets, I don't think we'd need a server
>> > (or a separate cli?). All we'd need is convoy serving combined assets
>> > from swift.
>> >
>> >> Should deletion of old versions be reserved for administrators?
>> >
>> > Yep, I think deleting old containers should be a webops task.
>> >
>> >>
>> >>
>> >> Any input welcomed :)
>> >
>> > Hope that's useful at least for some discussion before moving ahead -
>> > whatever the direction ends up being.
>> >
>> > Cheers,
>> > Michael
>
>
_______________________________________________
Mailing list: https://launchpad.net/~ubunet-discuss
Post to : ubunet-discuss@xxxxxxxxxxxxxxxxxxx
Unsubscribe : https://launchpad.net/~ubunet-discuss
More help : https://help.launchpad.net/ListHelp
--- End Message ---