Friday, February 14, 2014
The disk is offline because of policy set by an administrator
Friday, July 16, 2010
Microsoft recently announced its next move with Azure: the "Azure Platform Appliance". The idea is to offer the Azure platform as an infrastructure that can be deployed in private clusters. From my perspective, this is somewhat similar to the idea of private clouds (infrastructure services) that one can deploy using runtimes such as Eucalyptus or Nimbus on local clusters. However, since Azure is not merely an infrastructure service, it will be more flexible for users. Unlike pure virtual machines, the Azure Platform Appliance will expose most of Azure's platform services as well, and migration between the private Azure and the public Azure will be seamless. Overall, I think the biggest advantage of this approach is the peace of mind it gives businesses: "I am in control of my data, and it is local".
Thursday, June 24, 2010
Friday, June 11, 2010
Thursday, March 18, 2010
If a set of data is read by the application but does not change, that set of data is considered "static" in Twister. Matrix blocks, points in clustering, and the web graph in PageRank are all examples of static data.
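To make the static/variable distinction concrete, here is a minimal sketch (not Twister code; all names are illustrative) using one step of a simple 1-D k-means clustering, where the points are static data read every iteration but never modified, and the centroids are the variable data updated between iterations.

```python
# Sketch: "static" vs. variable data in an iterative computation,
# illustrated with a simple 1-D k-means step. The points are static
# (read every iteration, never modified); the centroids are the
# variable data that changes between iterations.

def assign_and_update(points, centroids):
    """One iteration: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    clusters = {i: [] for i in range(len(centroids))}
    for p in points:  # points: static data, only read
        nearest = min(range(len(centroids)),
                      key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    return [sum(c) / len(c) if c else centroids[i]
            for i, c in clusters.items()]

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]  # loaded once, reused every iteration
centroids = [0.0, 5.0]                   # variable data
for _ in range(10):
    centroids = assign_and_update(points, centroids)
```

A runtime that caches `points` in long-running tasks avoids re-reading them on every iteration, which is exactly the optimization the static-data distinction enables.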
In MapReduce, the communication between the map and reduce stages of the computation is a graph, not a tree. A map can produce (key, value) pairs that end up in multiple reducers. The framework does not impose any communication or connection topology between the map and reduce stages; it is purely the intermediate keys that determine the communication pattern. For example, in associative and commutative operations such as sum or histogramming, how the intermediate keys distribute intermediate results among the reduce tasks is not that important. However, for operations such as sorting or matrix operations, one can select the intermediate keys so that specific keys go to specific reduce tasks. Again, this is defined not by the network but by the keys and the key selector functions.
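The point about keys determining the communication pattern can be sketched with two partitioning functions (illustrative code, not any particular framework's API): a hash partitioner, where any reducer may receive any key, and a range partitioner, where key values are routed to specific reducers, as one would want for sorting.

```python
# Sketch: intermediate keys, not a fixed topology, determine the
# map-to-reduce communication pattern.

def hash_partition(key, num_reducers):
    # Any reducer may receive any key; fine for commutative,
    # associative operations such as sum or histogramming.
    return hash(key) % num_reducers

def range_partition(key, num_reducers, max_key=100):
    # Keys are routed by value, so reducer i receives a contiguous
    # key range; concatenating the reducer outputs in order then
    # yields a globally sorted result.
    return min(key * num_reducers // max_key, num_reducers - 1)
```

Swapping one key selector for the other changes the communication graph without touching the map or reduce functions themselves.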
Twister uses pub/sub messaging to implement a MapReduce runtime, especially to support iterative MapReduce computations. Like other MapReduce runtimes, it focuses on processing data while maintaining data-process affinity. Map and reduce functions in Twister are long-running processes, which makes possible a distinction between static and variable data. It supports broadcast and scatter type data distributions and reading data via local disks. I am not sure how the latter two functions could be handled using MRnet.
Still, comparing MRnet with MapReduce for data-processing applications would be an interesting analysis.
Tuesday, February 09, 2010
Twister is a lightweight MapReduce runtime we have developed by incorporating these enhancements. We have published several scientific papers [1-5] explaining the key concepts and comparing it with other MapReduce implementations such as Hadoop and DryadLINQ. Today we would like to announce its first release.
Key Features of Twister are:
- Distinction between static and variable data
- Configurable long running (cacheable) map/reduce tasks
- Pub/sub messaging based communication/data transfers
- Combine phase to collect all reduce outputs
- Efficient support for iterative MapReduce computations (dramatically faster than Hadoop or DryadLINQ)
- Data access via local disks
- Lightweight (5600 lines of code)
- Tools to manage data
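The first two features above can be sketched as follows. This is a hypothetical Python illustration of the pattern, not the actual Twister Java API: a long-running map task is configured once with static data, then driven repeatedly with fresh variable data, with a combine step collecting the outputs at the driver.

```python
# Hypothetical sketch (not the actual Twister API) of cacheable
# long-running tasks with a static/variable data distinction.

class CacheableMapTask:
    def configure(self, static_data):
        # Called once; the static data stays cached for the
        # lifetime of the long-running task.
        self.static = static_data

    def map(self, variable_data):
        # Called every iteration with only the variable data
        # broadcast to the task.
        return [x + variable_data for x in self.static]

def run_iterations(task, static_data, initial, iterations):
    task.configure(static_data)  # pay the data-loading cost once
    value = initial
    for _ in range(iterations):
        partial = task.map(value)            # map over cached static data
        value = sum(partial) / len(partial)  # "combine" step at the driver
    return value

result = run_iterations(CacheableMapTask(), [1.0, 2.0, 3.0], 0.0, 3)
```

In a runtime without cacheable tasks, the static data would be re-read and re-distributed on every iteration; keeping the tasks alive between iterations is what makes the iterative case efficient.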
1. Jaliya Ekanayake (Advisor: Geoffrey Fox), Architecture and Performance of Runtime Environments for Data Intensive Scalable Computing, Doctoral Showcase, SuperComputing 2009.
2. Jaliya Ekanayake, Atilla Soner Balkir, Thilina Gunarathne, Geoffrey Fox, Christophe Poulain, Nelson Araujo, Roger Barga, DryadLINQ for Scientific Analyses, Fifth IEEE International Conference on e-Science (eScience 2009), Oxford, UK.
3. Jaliya Ekanayake, Xiaohong Qiu, Thilina Gunarathne, Scott Beason, Geoffrey Fox, High Performance Parallel Computing with Clouds and Cloud Technologies, Technical Report, August 25, 2009; to appear as a book chapter.
4. Geoffrey Fox, Seung-Hee Bae, Jaliya Ekanayake, Xiaohong Qiu, and Huapeng Yuan, Parallel Data Mining from Multicore to Cloudy Grids, High Performance Computing and Grids workshop, 2008. An extended version of this paper appears as a book chapter.
5. Jaliya Ekanayake, Shrideep Pallickara, Geoffrey Fox, MapReduce for Data Intensive Scientific Analyses, Fourth IEEE International Conference on eScience, 2008, pp. 277-284.
Sunday, January 03, 2010
by Özalp Babaoglu, Lorenzo Alvisi, Alessandro Amoroso, Renzo Davoli, Luigi Alberto Giachini
I just found this paper and read it to the end, since I noticed similarities between what they proposed in 1992 and the current MapReduce programming model; some of their observations still hold today. Below I list a few of their observations/assumptions that show the similarity of their work to the current MapReduce programming model.
- Large-grain data flow model suitable for high-latency, low-bandwidth networks
- Only by keeping the communication-to-computation ratio at reasonable levels can we expect reasonable performance from parallel applications in such systems. – We noticed a similar thing when performing parallel computing on Cloud infrastructures [paper]
- Paralex functions must be "pure" – no side effects
- Nodes correspond to computations (functions, procedures, programs) and links indicate the flow of typed data – compare this with Microsoft Dryad's DAG-based programming model.
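The last two points can be sketched together in a few lines (an illustrative toy, not Paralex or Dryad code; the node names and helper are invented for the example): a computation expressed as a DAG of pure functions, where edges carry data between nodes.

```python
# Toy sketch of the large-grain dataflow idea: nodes are pure
# functions (no side effects), edges carry data between them.

def run_dag(nodes, edges, inputs):
    """nodes: name -> pure function; edges: name -> upstream names.
    Nodes with no upstream edge read their value from inputs."""
    results = {}
    def evaluate(name):
        if name in results:          # each node computed exactly once
            return results[name]
        upstream = edges.get(name, [])
        args = ([evaluate(u) for u in upstream]
                if upstream else [inputs[name]])
        results[name] = nodes[name](*args)
        return results[name]
    return {name: evaluate(name) for name in nodes}

out = run_dag(
    nodes={"double": lambda x: 2 * x,
           "square": lambda x: x * x,
           "add": lambda a, b: a + b},
    edges={"square": ["double"], "add": ["double", "square"]},
    inputs={"double": 3},
)
# out["add"] == 6 + 36 == 42
```

Because every node is pure, any node can be re-executed or scheduled on another machine without changing the result, which is exactly what makes this model (and Dryad's DAGs, and MapReduce tasks) easy to parallelize and to recover from failures.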
Some of their performance measurements ran into trouble with a 16 MB data set because one of their machines had only 16 MB of memory. Today we have the luxury of large memories, but our data sets have also grown into petabytes. What they did with NFS is now done with HDFS in Hadoop.