Blog - Inner Fence » photosleeve
Front- and back-end web engineering with Perl, Catalyst, YUI, S3, lighttpd

Got Flash? Got Silverlight? Make a FlashLight!
Mon, 28 Jul 2008
https://www.innerfence.com/blog/2008/07/28/got-flash-got-silverlight-make-a-flashlight/

With Photosleeve, one of our core ideas is to reduce the work and time required to share photos. One way we do this is by creating smaller-sized photos and uploading them first, which makes the initial sharing process and email generation pretty quick.

Doing this is straightforward with our Windows desktop application. But we have always been interested in finding a cross-platform, browser-based way of doing the same thing. Our hope was to increase user adoption in two ways: by supporting the increasingly popular Mac platform, and by eliminating the requirement that first-time uploaders download and install software.

Some of the things we considered: ActiveX, a Firefox plugin, a Java applet, Flash, Adobe AIR, and Silverlight. We wanted a solution that was cross-browser and cross-platform, worked in the browser rather than as a separate install, had a reasonable first-time user experience, and didn’t require the user to click through any scary security warnings.

After eliminating those that didn’t meet our criteria, we were down to Flash and Silverlight. To do what we wanted, we needed to let the user select multiple files, read the bytes locally, compute a SHA1 hash, load the bytes into an image and perform manipulations like rotation and scaling, re-encode the resulting images to JPEG, and upload them to our server. Unfortunately, Flash can’t read the bytes locally, and Silverlight can’t do image manipulations or re-encode to JPEG.

At this point we thought we were out of luck. For a while we chatted about this problem with others to see if they had insights, and we’d always jokingly conclude that you’d really need to build a hybrid Flash/Silverlight application to do what we wanted. There would always be jolly consensus that such an idea was too silly to pursue. We even came up with a silly name for the “new” RIA platform: FlashLight.

But as we thought about it more, we found it less and less silly. If we could think of Flash and Silverlight as Javascript libraries instead of monolithic app platforms, there was really no reason that we couldn’t use the functionality from both to achieve our goals. It probably wouldn’t be a happy developer-tool supported experience, but — hey — we like writing code.

So next we tried to figure out if we could use Javascript as the core and call into Flash and Silverlight as needed. It turns out Silverlight provides really great Javascript connectivity: essentially, you just decorate your types with attributes to make them accessible from Javascript. Flash’s Javascript interaction looked painful at first, but then I found a great add-on from the Flex SDK called FABridge that made it easy.

The plan was to use Silverlight to read the JPEG files, then pass those bytes over to Flash for image processing. But in the middle is Javascript, and Javascript doesn’t deal with binary data very well. Keeping it simple, we decided to base64 encode the bytes and pass them through Javascript as strings. Both Silverlight and Flash have libraries to deal with base64-encoded data.
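
The glue ends up looking something like the sketch below. The plugin objects here are hypothetical stand-ins (plain Javascript mocks), since the real methods are exposed from C# via Silverlight’s scriptable attributes and from ActionScript via FABridge; the point is that the image bytes cross Javascript only as base64 strings.

```javascript
// Mock Silverlight side: reads a local file and hands back base64 text.
// (The real object is a C# type exposed with scriptable attributes.)
var silverlightReader = {
    readFileAsBase64: function (fileName) {
        // Real code reads the file's bytes and base64-encodes them;
        // here we return a fixed string ("Hello, photo!" encoded).
        return "SGVsbG8sIHBob3RvIQ==";
    }
};

// Mock Flash side: decodes the base64, scales/rotates, re-encodes to
// JPEG. (The real object is ActionScript reached through FABridge.)
var flashProcessor = {
    processImageBase64: function (base64Jpeg, maxWidth) {
        return base64Jpeg; // pretend the processed image comes back
    }
};

// The glue itself: binary data never exists in Javascript as bytes,
// only as base64 strings passed between the two plugins.
function makePreview(fileName, maxWidth) {
    var original = silverlightReader.readFileAsBase64(fileName);
    return flashProcessor.processImageBase64(original, maxWidth);
}

var preview = makePreview("IMG_1234.JPG", 640);
```

Base64 inflates the payload by about a third, but it keeps the Javascript layer trivially simple, and both plugins already know how to decode it.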

Once we had the pieces in place, we were able to crank out our web-based uploader in about a week by writing a little bit of C#, and moderate amounts of Javascript and ActionScript. You can see the results on Photosleeve. (You’ll have to sign up for an account and click on Add Photos in the upper right.)

Thinking of Flash and Silverlight as complementary Javascript libraries instead of competing app platforms allowed us to build an app that leveraged Silverlight’s client-side file access and Flash’s client-side image manipulation.

Have other exciting apps been built using this technique? Are there others waiting to be built?

Presto! Move content to S3 with no code changes
Sun, 01 Jun 2008
https://www.innerfence.com/blog/2008/05/31/presto-move-content-to-s3-with-no-code-changes/

Our initial version of Photosleeve stored the full-resolution images locally on our server. Clearly this was a temporary measure, and we’re happy to announce that we’ve now moved things to Amazon S3. But we did it without changing any of our existing back-end code, which I think is kind of interesting.

We had anticipated the move to S3, so storage was appropriately abstracted in our codebase. My original intention was to swap out “FileStorage” for “S3Storage” and be done. But as I read about S3, I saw that it was important to plan for potential periods of unresponsiveness. For example, the Net::Amazon::S3 CPAN module recommends the use of the “retry” parameter, which will use exponential backoff in the event Amazon cannot be contacted.

Well, my customer just spent several minutes uploading his multi-megabyte full-resolution original image to my server. I don’t want to leave him hanging while I desperately wait for Amazon S3 to respond.

The solution was to leave the back-end code alone. It continued to stash the files someplace local that our webserver could serve out as static content. Alongside it, I wrote a perl daemon that watched the location where the back-end dropped the files, and every so often pushed them up to S3. Only when it was certain the files had been properly transmitted to S3 would the daemon delete the local copies (ok, actually it archived them to another offline location because we’re paranoid and didn’t want to mess up anybody’s photos).

So now the trick was getting our existing “original photo” URLs to serve local content if it was available, or redirect to S3 if it wasn’t. Well, that should be easy; I just need to find the blog of a rewrite-rule wizard, and … Oh, wait. We use lighttpd.

We’re big admirers of lighttpd. With almost no tweaking it handles incredible amounts of traffic with almost no load. Maybe you can get Apache to do that, but we don’t know how and probably don’t have the time to figure it out. For this particular problem, though, I knew Apache’s mod_rewrite would be an easy fix. Well, as easy as Apache rewrite rules ever are, I mean. lighttpd clearly had support for redirects, but we couldn’t express the conditional we needed: redirect only if the file didn’t exist locally.

Enter mod_magnet. With it and a little Lua, we were able to write an extremely simple script that does exactly what we want. And — bonus! — I bet just about anybody can understand how it works. (I know rewrite rules are powerful arcane magic, worth learning, but I’ve never found the time and find the syntax completely impenetrable.)

-- /etc/lighttpd/s3.lua (Sample Code)
-- Runs via mod_magnet before lighttpd serves the file: if the
-- requested file isn't on local disk, redirect to the copy on S3.
local filename = lighty.env["physical.path"]
local stat = lighty.stat( filename )
if not stat then
    local static_name = string.match( filename, "static/([^/]+)$" )
    lighty.header["Location"] = "http://s3.photosleeve.com/original/" .. static_name
    return 302
end

Get the filename, ok. Stat the file, sure. If it’s not there, capture a regex match group, and set the location header. Return a 302. Wow. It’s not all on one line, but I sure understand how it works.

Now we just have to hook it up to lighttpd. This does require that Lua support is compiled in. Run `lighttpd -V` and make sure the output includes the line “+ LUA support”. Kevin Worthington has built lighttpd 1.5.0 r1992 RPMs with Lua/mod_magnet support compiled in.

# /etc/lighttpd/lighttpd.conf (Sample Code)
server.modules += ( "mod_magnet" )
$HTTP["url"] =~ "^/static/" {
    server.document-root = var.photosleeve-static
    $HTTP["url"] =~ "^/static/[^/]+[.]jpg([?].*)?$" {
        magnet.attract-physical-path-to = ( "/etc/lighttpd/s3.lua" )
    }
}

I specifically only run the Lua code for the precise sort of URLs I might want to redirect, which should keep the overhead down. As for having the redirects in the first place, I don’t think a little extra latency matters when you’re about to download a multi-megabyte file. And coming through my server also gives me a chance to see the request before Amazon does. Perhaps later I’ll want to be smart and cache some of the data locally based on traffic trends. Or I could add access control mechanisms (in which case the redirect would contain a signed S3 request). So many cool possibilities. And in the meantime, lighttpd handles the request without bothering my back-end perl processes.

So that’s it. Now our back-end works as it always has, dropping the content locally and generating URLs back to ourselves. But when it’s not looking, a sneaky little daemon shifts things around, and the webserver takes care of hiding the mess.

Huh? What does the perl daemon look like? Fodder for another post, I think.
