I was really looking forward to this conference based on my experience last year, with the likes of Jeff Dean and Marissa Mayer presenting. When I saw the original agenda for Saturday, I was excited to see they had expanded the number of talks, at the expense of forcing a decision about which presentation to attend, a task at which I often feel I fail.
When I arrived, late, I was surprised to learn they had changed the format: rather than run two tracks per session, the presentations were shortened so everyone could attend every talk. I’m not sure how much notice the presenters were given of this decision, because a number had presentations well exceeding the diminished time frame. As a conference presenter myself, I know that a well-rehearsed presentation can be difficult to amend on the fly.
Communicating Like Nemo
I’m not sure what I was supposed to get out of this presentation. I understand that working under water places significant constraints on connectivity, bandwidth and other factors, but I didn’t feel like I really learned much about how these are being overcome. I did get to brush up on my PADI hand signals – it’s been a while since I last dove.
Since I arrived at the conference a bit late, I was seated toward the rear of the room for the first two talks. The presenter chose to use the whiteboard as his primary presentation medium, which, as a friend said, demonstrates he really has confidence and knows his shit, but for me was unfortunate since I could barely hear the presentation or see the board. Since my mind was already deep in debugging objc’s forwardInvocation:, I chose to leave the room and finish my work, in which I’m happy to report success. Afterwards, I learned this talk was pretty good if you could see and hear it.
Chapel
Fantastic. This was the quality and topic of talk I was looking forward to seeing. Chapel is a new programming language coming out of Cray which:
supports a multithreaded parallel programming model at a high level by supporting abstractions for data parallelism, task parallelism, and nested parallelism. It supports optimization for the locality of data and computation in the program via abstractions for data distribution and data-driven placement of subcomputations.
It supports constructs within the language to create and execute arbitrarily nested tasks via a begin keyword, and to join on the results of those calculations via sync. Furthermore, it can execute the same tasks in parallel by using coforall operations without changing the underlying code. This is an improvement over the current state of the art, MPI programming, which forces the developer to have intimate knowledge of both the high-level logic of the application and the distributed runtime, creating difficult-to-maintain code. Chapel also supports synchronization of tasks in a similar, data-driven manner.
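For readers who, like me, won’t be writing Chapel any time soon, here’s a rough Python analogy of the spawn-and-join idea using concurrent.futures – this is not Chapel, just an illustration of what `begin` (spawn a task) and `sync` (join on the results) buy you:

```python
# Rough analogy only: Chapel's `begin` spawns a task, `sync` joins on it,
# and `coforall` runs one task per loop iteration. Python's executor gives
# a similar spawn/join shape, though without Chapel's language-level support.
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor() as pool:
    # like `begin` / `coforall`: spawn one task per iteration, don't wait yet
    futures = [pool.submit(square, i) for i in range(4)]
    # like `sync`: block until all spawned tasks complete
    results = [f.result() for f in futures]

print(results)  # [0, 1, 4, 9]
```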
In addition to the task and data parallelism, Chapel supports the idea of locales, which can be CPUs, cores or separate machines entirely. Through lower-level constructs such as on, the developer can specify where tasks should run and how resources are accessed and utilized.
This is pretty exciting. I like the approach of high-level, don’t-worry-about-it language features with the ability to dig deeper if necessary. Unfortunately, I’m not sure this language will ever see a line of code from the likes of me given its intended problem domain and hardware.
Carmen
The scientific community is plagued by a number of issues regarding research, such as a myriad of file formats, no central repository for data, and limited data sharing and analysis. Carmen addresses some of these concerns through a domain-specific cloud architecture. In many ways it looks and feels like EC2+AWS, but it addresses the specific needs of the science community, such as the security model for collaboration and the cost structure of using the commercial clouds, given that the cost of data storage there would be extraordinarily high.
In order to carry out experiments or analysis, data and services are uploaded to the cloud and then a workflow is created to integrate, via SOAP, the binary services (WARs, executables). During the runtime of the analysis, if additional services are required (based on numerous metrics) they are automatically created and deployed. This sounds a lot like a combination of EC2 and AppEngine.
The presenter also showed a photo of an exposed human brain from an operation – unexpected at a computer conference.
GIGA+
This was one of those talks that was, for me, better for the bits of take-away material than the actual product being presented. For example, when a node reaches storage capacity in GIGA+, it splits some of its data elsewhere. To achieve limited-to-no locking, each node keeps a table of where it sent data, so every client doesn’t have to be updated immediately but can instead be updated lazily. If a client makes a request to the old node because of a stale view of the world, the request is forwarded, à la HTTP, and the client updated. I also learned about extendible hashing and bitmap management of partition locations.
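That lazy-forwarding idea is the bit that stuck with me, so here’s a hypothetical sketch of it – my own toy code, not GIGA+ itself, with made-up server names:

```python
# Toy sketch: a server splits a partition elsewhere without telling clients;
# a client with a stale map still asks the old server, which forwards the
# request (much like an HTTP redirect) and hands back an updated hint.
class Server:
    def __init__(self, name):
        self.name = name
        self.data = {}
        self.forward = {}          # predicate -> server the split moved to

    def split_to(self, other, predicate):
        moved = {k: v for k, v in self.data.items() if predicate(k)}
        for k in moved:
            del self.data[k]
        other.data.update(moved)
        self.forward[predicate] = other

    def get(self, key):
        if key in self.data:
            return self.data[key], self       # (value, current owner hint)
        for pred, other in self.forward.items():
            if pred(key):
                return other.get(key)          # forward; reply carries new hint
        raise KeyError(key)

a, b = Server("a"), Server("b")
a.data = {1: "one", 2: "two", 3: "three"}
a.split_to(b, lambda k: k > 2)     # split without updating any client

value, owner = a.get(3)            # stale client still asks `a`
print(value, owner.name)           # three b  -- client can now cache the hint
```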
This could have been a more interesting talk, but a lot of assumptions were made about the operating environment – the network is always reliable, the configuration is static, there is no offline/disconnected mode – making it more or less unrealistic at the moment.
Google Maps Mobile
A light, but interesting overview of the problems facing mobile development: lots of OSs, form factors, bandwidth, available storage, security, localization, …
Wikipedia on Erlang
This talk should have replaced Erlang with DHT in the title, for it was really about replacing a typical large-scale MySQL database cluster with a DHT plus transactions to implement a clone of Wikipedia. As far as I could tell, Erlang was used as a pseudo-message bus, with most of the development done in Java and integrated with Erlang via JInterface. In the end, this looked like a similar implementation to SimpleDB or any of the other key-value stores.
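For flavor, here’s the kind of consistent-hashing lookup a DHT like this leans on – my own minimal sketch, not the presenters’ system, and the node names are made up:

```python
# Minimal sketch: hash keys and nodes onto the same ring; a key's owner is
# the first node at or after the key's position, wrapping around. Any node
# can compute this locally, so there's no central database to consult.
import hashlib
from bisect import bisect

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def owner(self, key):
        points = [p for p, _ in self.ring]
        i = bisect(points, h(key)) % len(self.ring)
        return self.ring[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.owner("Wikipedia/Main_Page"))  # deterministic owner for this key
```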
NetWorkSpaces
NetWorkSpaces is a Python-implemented (Twisted and Zope) tuplespace integrated with R to provide parallel computation for R’s otherwise serial computational model. Given the almost commodity-like tuplespace environment, it seems the real advantage here is the integration with R and not the tuplespace itself (again, see SimpleDB, …), though the presenter pointed out NWS would run on any platform that runs Python. The typical deployment is small, around 12-16 nodes, because that’s a normal installation size, not a limitation of the architecture.
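To show what a tuplespace buys you, here’s a toy sketch – this has nothing to do with NWS’s actual implementation, it just illustrates the model: workers coordinate through pattern-matched tuples in a shared space rather than by messaging each other directly:

```python
# Toy tuplespace: `out` writes a tuple, `take` removes the first tuple
# matching a pattern (None = wildcard), blocking until one appears.
import threading

class TupleSpace:
    def __init__(self):
        self.tuples = []
        self.cond = threading.Condition()

    def out(self, t):
        with self.cond:
            self.tuples.append(t)
            self.cond.notify_all()

    def take(self, pattern):
        with self.cond:
            while True:
                for t in self.tuples:
                    if len(pattern) == len(t) and \
                            all(p is None or p == v for p, v in zip(pattern, t)):
                        self.tuples.remove(t)
                        return t
                self.cond.wait()    # block until another thread calls out()

ts = TupleSpace()
ts.out(("result", 1, 42))
print(ts.take(("result", 1, None)))   # ('result', 1, 42)
```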
Shared Transactional Memory
A good, general overview of the problems facing language and hardware (Azul, Sun Rock) developers and engineers as they attempt to address transactional memory. I thought the presenter did a nice job of demonstrating the issues through code examples, but as with any [H|S]TM presentation, it was light on answers and heavy on “that needs to be figured out”.
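Since the talk was code-driven, here’s a toy optimistic STM in my own Python to illustrate the core idea – buffer writes, validate read versions at commit, retry on conflict. This is illustrative only; real [H|S]TM implementations are vastly more involved (and this toy doesn’t even lock its commit):

```python
# Toy optimistic STM: a transaction records the version of every ref it
# reads and buffers its writes. At commit it validates that no read ref
# changed underneath it; on conflict it retries the whole transaction.
class Ref:
    def __init__(self, value):
        self.value, self.version = value, 0

def atomically(txn):
    while True:
        reads, writes = {}, {}

        def read(ref):
            if ref in writes:               # read-your-own-writes
                return writes[ref]
            reads.setdefault(ref, ref.version)
            return ref.value

        def write(ref, value):
            writes[ref] = value

        result = txn(read, write)
        # commit: validate that nothing we read changed underneath us
        if all(ref.version == v for ref, v in reads.items()):
            for ref, value in writes.items():
                ref.value, ref.version = value, ref.version + 1
            return result
        # otherwise: conflict, loop and retry the transaction

balance = Ref(100)

def withdraw(read, write):
    write(balance, read(balance) - 30)
    return read(balance)

print(atomically(withdraw))  # 70
print(balance.value)         # 70
```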
I was glad I went for the Chapel talk and enjoyed the Carmen, GIGA+ and STM talks.
One of the themes I took away was that while cloud computing has become mainstream, there’s a need to add domain-specific abstractions on top of it – not too dissimilar, really, to the ever-growing popularity of DSLs implemented in mainstream languages.
I liked last year’s approach better: fewer talks, more time for each presentation, more polished speakers and more technical content; I also liked the move to Seattle from Bellevue.