Copyright ©  2012  Mehmet Balman

Research Statement, 2012 / Mehmet Balman

My research background spans the fields of distributed systems, data-intensive computing, high-speed networking, data scheduling, and resource management. My current work deals particularly with performance problems in high-bandwidth networks, efficient data transfer mechanisms and data streaming, high-performance network protocols, network virtualization, software-defined networking, and data transfer scheduling for large-scale applications. I target open problems in data management and networking for collaborative systems. Specific areas of interest include dynamic resource provisioning, end-to-end processing of data, autonomic resource coordination and scheduling, novel data-access layers for transparent data services, and data discovery services for science.

Accessing and managing large amounts of data is one of the major difficulties in both science and business applications. In addition to increasing data volumes, future scientific collaborations will require cooperative work at extreme scale. Traditional approaches that leave the burden of moving and storing data on the user are no longer viable. We still lack a good understanding of how to use the network infrastructure efficiently to meet increasing data requirements. Resource management and scheduling problems are gaining importance due to current developments in utility computing and the high interest in Cloud infrastructure. We require complex middleware to orchestrate storage, network, and compute resources, and to manage end-to-end processing of data.

My next major focus is to understand how current technological developments will revolutionize application design and data management middleware over the next 5-10 years. Within this scope, I explore challenges in data-intensive distributed computing to outline the next generation of network-aware data management middleware. I study cutting-edge network performance and behavior, and I argue that we need new data access layers between the application and the network stack to fully benefit from high-speed networks. Furthermore, I explore novel mechanisms and intelligent data management systems to envision future design principles for network virtualization and resource sharing. My goal is to establish a theoretical framework for autonomous resource provisioning and scheduling in next-generation dynamic networks.

In addition to independent research, much of my work is performed in collaboration with experts in different application domains. I am always interested in finding application partners to implement real-life solutions. In my past work, I interacted closely with developers, users, domain scientists, and system engineers. I have worked closely with the Earth System Grid Federation (ESGF) team on data dissemination for climate research. Other areas I am currently exploring are biological data management and radio astronomy. I plan to seek new partnerships with network experts and scientific computing centers for future prototyping and experimentation opportunities.

Past Accomplishments

100Gbps and Beyond:
  I was deeply involved in the initial evaluation of ESnet’s (Energy Sciences Network) 100Gbps network, and co-authored one of the first research papers on this work [3]. I also performed experiments on the feasibility of RDMA (Remote Direct Memory Access) over Converged Ethernet (RoCE) in the wide area. Dealing with many files imposes extra bookkeeping overhead, especially over high-latency links. I have developed a new approach, called Memory-mapped zero-copy Network Channel (MemzNet), which provides dynamic data channel management and out-of-order, asynchronous data processing. In my approach, data is aggregated and divided into simple data blocks, in contrast to current file-based methods. Blocks are tagged and streamed over the network [2]. I think of this work as a no-FTP approach that handles both bulk data replication and data streaming over sets of many files simultaneously. I used MemzNet for the SC11 100Gbps live demo, in which CMIP3 climate data was staged from NERSC (Berkeley) into the memory of computing nodes across the country at ANL (Argonne) and ORNL (Oak Ridge). This is pioneering work that demonstrates the performance of network applications over 100Gbps [3]. A major obstacle to the use of high-bandwidth networks is the limitation of host system resources. Using 100Gbps efficiently is a feasible goal, but careful evaluation is necessary to avoid bottlenecks in end-systems and to eliminate the effects of using multiple NICs in multi-core environments. MemzNet is an example of a successful application design that takes advantage of high-bandwidth networks for high-throughput, low-overhead data movement.
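
The block-tagging idea behind MemzNet can be illustrated with a toy sketch. Function names, the tiny block size, and the in-memory representation are hypothetical; MemzNet itself operates on memory-mapped buffers at far larger scales. The key point is that tagged blocks carry enough metadata to be placed independently of arrival order, so no per-file transfer setup is needed:

```python
BLOCK_SIZE = 4  # tiny for illustration only

def blockify(files):
    """Aggregate file contents into fixed-size tagged blocks.

    Each block carries a (file_id, offset) tag so a receiver can place
    it correctly regardless of the order in which blocks arrive.
    """
    blocks = []
    for file_id, data in files.items():
        for off in range(0, len(data), BLOCK_SIZE):
            blocks.append((file_id, off, data[off:off + BLOCK_SIZE]))
    return blocks

def reassemble(blocks, sizes):
    """Rebuild files from tagged blocks, tolerating out-of-order arrival."""
    out = {fid: bytearray(n) for fid, n in sizes.items()}
    for fid, off, chunk in blocks:
        out[fid][off:off + len(chunk)] = chunk
    return {fid: bytes(buf) for fid, buf in out.items()}
```

Because the tags are self-describing, the receiver can process blocks asynchronously as they arrive, which is what allows a single data channel to serve many files at once.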

Advance Resource Provisioning: 
  Many scientific analysis programs need periodic and time-sensitive movement of large-scale datasets. One open problem is the dearth of robust, economic data scheduling models for time-sensitive data movements in future research infrastructures. Delivering data movement between collaborating facilities as a service, where users can plan ahead and schedule their requests in advance, is highly desirable. My proposed data-scheduling model provides coordination between data transfer nodes and network reservation systems. I have always been interested in theoretical issues and innovative approaches to complex scheduling problems. Generally speaking, scheduling with time and resource conflicts is NP-hard. I have introduced a practical heuristic inspired by the Gale-Shapley algorithm, whose underlying theory of stable allocations was recognized with the 2012 Nobel Prize in Economics. I have developed an online scheduling algorithm that generates near-optimal results to organize multiple requests on-the-fly while satisfying users’ time and resource constraints [1]. My work gives a detailed analysis of online scheduling and resource assignment problems that are quite important in Cloud computing. Furthermore, it offers a practical solution applicable to Software Defined Networking (SDN).
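
For reference, the classic Gale-Shapley deferred-acceptance algorithm that inspired this heuristic can be sketched as below. This is the textbook stable-matching procedure, not the scheduling heuristic of [1] itself; the variable names are illustrative:

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Classic deferred-acceptance matching (Gale-Shapley).

    Both arguments map a name to an ordered preference list of names
    on the other side. Returns a stable matching {proposer: acceptor}.
    """
    # rank[a][p] = position of proposer p in acceptor a's preference list
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)          # proposers not yet matched
    next_choice = {p: 0 for p in free}   # index of next acceptor to try
    engaged = {}                         # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])      # current partner is displaced
            engaged[a] = p
        else:
            free.append(p)               # p is rejected, tries next choice
    return {p: a for a, p in engaged.items()}
```

In a scheduling setting, proposers and acceptors map naturally to transfer requests and resource slots, with preference order derived from time and resource constraints.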

Flexible Network Reservations:
  Many scientific research networks provide on-demand virtual circuits for large-scale data replication, high-performance remote data analysis, and visualization. Advance network reservation systems provide guaranteed bandwidth and predictable performance, but they are designed to support rigid reservation requests in which the time window and bandwidth are fixed. This leads to a trial-and-error sequence if the requested reservation is not granted. My approach is to let users specify flexible reservations in terms of a desired time period and the data volume to be transferred, and let the system find the best solution within these constraints. This is analogous to booking airline flights, where more options and lower fares are offered if travelers can be flexible about their travel dates and times. The difficulty is that this problem falls into a class of dynamic network problems, since the bandwidth value of every link is time-dependent. My expertise in graph theory helped me analyze the network capacity graph by dividing the search interval into time windows. I designed a new technique to better examine the time-dependent topology and minimize the number of calculations. I have developed a fast and scalable algorithm that finds possible reservation options [8]. Beyond advance network reservations, my approach is also applicable to other problems, including resource matching and scheduling over time-dependent complex graphs.
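
The search over time windows can be illustrated with a deliberately brute-force toy. It assumes the path is fixed and that availability is given per unit-length time slot (both simplifications); the algorithm in [8] operates on the full time-dependent graph and avoids this exhaustive recomputation:

```python
def earliest_reservation(avail, volume, deadline):
    """Find the earliest (start, end, bandwidth) option that moves
    `volume` units before `deadline`, where avail[t] is the bandwidth
    free on the path during unit-length time slot t.

    The usable bandwidth of a window is its minimum availability,
    i.e., the bottleneck over time rather than over space.
    """
    for start in range(deadline):
        for end in range(start + 1, deadline + 1):
            bw = min(avail[start:end])       # bottleneck across the window
            if bw * (end - start) >= volume:
                return start, end, bw
    return None                              # no feasible option
```

Returning the option rather than a yes/no answer is what enables the flight-booking style of interaction: the system can enumerate several feasible windows and let the user pick.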

Failure-Awareness and Request Aggregation:
  During my graduate studies, I was deeply involved in the sharing and dissemination of large datasets across distributed resources. I worked closely with domain scientists (in mechanical engineering, high-energy physics, and coastal studies) in state-wide collaborative projects in Louisiana. My research emphasis was on data placement scheduling to mitigate the data bottleneck in peta-scale systems. I presented a failure-aware data placement methodology with early error detection, error classification, and failure recovery [7]. In addition to failure-awareness, I developed reordering and aggregation techniques that resulted in major performance improvements in the scheduling of wide-area data movement operations [5]. Multiple requests are combined and processed as a single transfer operation. Without this optimization, each transfer operation requires separate connection setup and protocol initialization. As the number of requests for small amounts of data increases, these small overheads add up and become a significant cost. My approach extends fundamental data handling and data management techniques to distributed computing. Similarly, in another project I applied prefetching and caching techniques to aggregate remote I/O operations in filesystem drivers [6]. That method eventually improved users’ overall data access performance by 3-4 orders of magnitude.
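
The payoff of aggregation can be shown with a simple cost model. The setup cost and transfer rate below are invented numbers for illustration, not measurements from [5]; the point is only that per-connection overhead is paid once per endpoint pair instead of once per request:

```python
from collections import defaultdict

SETUP_COST = 1.0   # assumed per-connection setup/protocol cost (seconds)

def transfer_cost(requests, rate, aggregate):
    """Model total time for a list of (src, dst, nbytes) requests.

    Without aggregation, every request pays its own connection setup;
    with aggregation, requests sharing an endpoint pair pay it once.
    """
    if not aggregate:
        return sum(SETUP_COST + n / rate for _, _, n in requests)
    groups = defaultdict(int)
    for src, dst, n in requests:
        groups[(src, dst)] += n        # combine into one transfer operation
    return sum(SETUP_COST + n / rate for n in groups.values())
```

For many small requests the setup term dominates, so aggregation reduces total time by roughly the number of requests combined.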

Dynamic Adaptation in Data Transfers:
  Network transfers are affected by host performance, underutilized capacity in end-systems, system overheads, inadequate protocol tuning, and the network latency between end-nodes. Dynamic tuning and adaptive transfer optimization are important for high-performance data movement. In my approach [9], performance is gradually improved and brought to the optimum level without the burden of external profiling or complex optimization procedures. The elegance of the method is that it requires no prior knowledge of the environment. My dynamic tuning algorithm offers a practical way to minimize the effect of network latency by providing high-quality tuning for the best system and network utilization.
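
The measure-and-adjust idea can be sketched as a simple hill-climb on the number of parallel streams. This is a simplification of the algorithm in [9], and `measure` stands in for an actual throughput probe; the sketch only shows why no prior model of the environment is needed:

```python
def tune_streams(measure, max_streams=32):
    """Gradually adjust the number of parallel streams toward the
    throughput optimum with no prior knowledge of the environment.

    measure(k) returns the achieved throughput with k streams; we keep
    adding streams while throughput improves and stop at the knee,
    where extra streams no longer hide the effect of latency.
    """
    k, best = 1, measure(1)
    while k < max_streams:
        t = measure(k + 1)
        if t <= best:        # no further gain: stop before over-subscribing
            break
        k, best = k + 1, t
    return k
```

Since each step is driven by a live measurement, the same loop adapts automatically when conditions change, which is the property the paragraph above emphasizes.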

Mesh Refinement:
  Since the beginning of my academic studies, I have been fascinated by complex, large-scale problems in distributed and parallel environments. Handling very large, real-life, non-uniform mesh structures requires parallel processing, synchronization, and appropriate data structures to manage data distributed among many processing elements. I presented a new framework implemented with the Message-Passing Interface (MPI), and devised a scalable algorithm [10] for parallel tetrahedral mesh refinement using the longest-edge bisection technique. In my approach, data in each node is processed locally, and results are synchronized using a specialized data structure that coordinates the overall refinement process.
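
The local refinement step can be illustrated in two dimensions, where a triangle is split at the midpoint of its longest edge. This is only the planar analogue of the tetrahedral bisection used in [10], with illustrative names, but it shows the core operation each processing element applies to its local elements:

```python
import math

def longest_edge_bisect(tri):
    """Split a triangle at the midpoint of its longest edge.

    tri: three (x, y) vertices. Returns the two child triangles,
    each sharing the midpoint of the parent's longest edge.
    """
    edges = [(0, 1), (1, 2), (2, 0)]
    i, j = max(edges, key=lambda e: math.dist(tri[e[0]], tri[e[1]]))
    k = 3 - i - j                      # index of the opposite vertex
    mx = ((tri[i][0] + tri[j][0]) / 2, (tri[i][1] + tri[j][1]) / 2)
    return [(tri[i], mx, tri[k]), (mx, tri[j], tri[k])]
```

In the distributed setting, the interesting part is not this local split but propagating it: when a bisected edge lies on a partition boundary, neighboring processes must agree on the new midpoint, which is what the synchronization data structure handles.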

Future Research Plans

The Energy Sciences Network and Internet2 are working together to bring 100Gbps networking into production. 100Gbps is beyond the capacity of today’s commodity machines: filling a 40Gbps or 100Gbps network requires a substantial amount of processing power and the involvement of multiple cores. As network bandwidth grows, overhead and performance-related issues have a greater impact. Current efforts mainly focus on improving performance in file transfers or in custom applications such as remote visualization. I particularly target end-to-end processing of data in general, and I evaluate future high-bandwidth networks from the applications’ perspective. This high bandwidth will bring new challenges. We cannot expect every application to be re-tuned and improved each time the link technology or speed changes. Instead of explicit improvements to every application as the underlying technology evolves, we require novel data movement mechanisms and abstraction layers between the existing network layers for adaptive and automatic tuning. Providing fast network access will benefit many scientific applications. Just as multicore systems require new programming models and techniques, I believe that future data access and data management systems will require novel methodologies instead of incremental improvements to current tools. I am studying novel techniques for high-bandwidth networks to eliminate system overheads and bring the anticipated high performance to the application layer, along with ease of use for end-users.

As scientific collaborations and data sharing increase, coordination and sharing of resources among users are becoming challenging issues. At present, resources are organized mostly in a dedicated fashion: dedicated data servers and storage elements, impractical year-long network reservations, and so on. This is neither collaboration-friendly nor efficient in terms of usage. There are many studies in resource scheduling that capture decades of experience and techniques. However, because they do not consider how people deal with compute, data, and network resources in collaborative science, these known techniques are of limited use and will not scale to future collaborative environments. Collaboration also includes human aspects in the use of technology. An important drawback of many solutions proposed in the literature is that they depend on user constraints as input. They place an extra burden on users and expect users to be truthful and provide their true requirements, which rarely happens without strong incentives. I believe this is one of the reasons why QoS systems and co-scheduling models are not widely deployed. The question I target is whether we can learn patterns and develop policies automatically for resource provisioning, based on automated, rule-based analysis of those patterns. Future systems should make intelligent decisions without strictly depending on user intervention. I collaborate with other researchers to identify the requirements of future scientific networking and data management. My research plan is to design an autonomic resource sharing system that can adaptively learn user patterns and update and manage the systems accordingly.

Professional Service

I see professional service as a good opportunity to gain recognition in the research community, extend my network, follow active research projects by others, and, most importantly, find potential collaborators. I have served on the technical program committees of several workshops and conferences, and I actively review for many journals. For the last two years, I have co-organized the workshop on Network-aware Data Management (NDM), co-located with the Supercomputing (SC) conference. The goal of the workshop is to discuss emerging trends and create new collaborations between the network and data management communities. These workshops have gained significant attention, and we have received very positive feedback from the audience. I have also co-organized two panel sessions on Open Problems and New Directions in Network-aware Data Management. The keynote and panel speakers we have hosted include Ian Foster (Argonne National Laboratory & University of Chicago), Karsten Schwan (Georgia Institute of Technology), Richard Carlson (DOE ASCR), Daniel S. Katz (NSF Cyberinfrastructure), Dhabaleswar Panda (Ohio State University), and many other well-known experts from academia and government labs. Based on this workshop experience, I co-authored a recent paper [4] that evaluates emerging trends, discusses open problems, and articulates my perspective on network-aware data management.


[1]    M. Balman, Advance Resource Provisioning in Bulk Data Scheduling. In Proceedings of the 27th IEEE International Conference on Advanced Information Networking and Applications (AINA), 2013.

[2]    M. Balman, Streaming Exascale Data over 100Gbps Networks, IEEE Computing Now, Oct 2012.

[3]    M. Balman, E. Pouyoul, Y. Yao, E. W. Bethel, B. Loring, Prabhat, J. Shalf, A. Sim, B. L. Tierney, Experiences with 100Gbps Network Applications. In Proc. of the 5th Int. workshop on Data-Intensive Distributed Computing, in conjunction with HPDC’12, 2012.

[4]    M. Balman, S. Byna, Open Problems in Network-Aware Data Management in Exascale Computing and Terabit Networking Era. In Proc. of the Int. Workshop on Network-aware Data Management, in conjunction with SC11, 2011.

[5]    T. Kosar, M. Balman, E. Yildirim, S. Kulasekaran, B. Ross, Stork Data Scheduler: Mitigating the Data Bottleneck in e-Science, in Philosophical Transactions of the Royal Society A, Vol.369 (2011), pp. 3254-3267. 

[6]    T. Kosar, I. Akturk, M. Balman, X. Wang, PetaShare: A Reliable, Efficient, and Transparent Distributed Storage Management System, Scientific Programming, Vol. 19, No. 1 (2011), pp. 27-43.

[7]    M. Balman and T. Kosar, Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling, International Journal of Autonomic Computing, Vol. 1, No. 4 (2010), pp. 425-446. DOI: 10.1504/IJAC.2010.037516.

[8]    M. Balman, E. Chaniotakis, A. Shoshani, A. Sim, A Flexible Reservation Algorithm for Advance Network Provisioning. In Proceedings of the ACM/IEEE International Conference for High Performance Computing, Networking, Storage and Analysis (SC10), 2010.

[9]    M. Balman, T. Kosar, Dynamic Adaptation of Parallelism Level in Data Transfer Scheduling. In Proc. of the Int. Workshop on Adaptive Systems in Heterogeneous Environments, in conjunction with IEEE CISIS'09 and IEEE ARES'09, 2009.

[10]    M. Balman, Tetrahedral Mesh Refinement in Distributed Environments, in Proceedings of IEEE International Conference on Parallel Processing Workshops, IEEE Computer Society 2006, pp. 497-504, ISBN:0-7695-2637-3.