AlgoPARC Publications


As is customary in algorithms research, most of the publications below list authors in alphabetical order. The few publications that appear in venues where authorship is ordered by individual contribution are marked with (*).

Disclaimer: The papers here are provided for prompt dissemination of research results. The copyright for some of them is held by the respective publishers, and they should not be reproduced or republished elsewhere. The definitive versions are available from the publishers' official websites (via the provided DOI links).

Conference Proceedings

  1. Abstract: Program performance on modern hardware is characterized by locality of reference, that is, it is faster to access data that is close in address space to recently accessed data than data at a random location. This is due to many architectural features, including caches, prefetching, virtual address translation, and the physical properties of a hard disk drive; attempting to model all the components that constitute the performance of a modern machine is impossible, especially for general algorithm design purposes. What if one could prove an algorithm is asymptotically optimal on all systems that reward locality of reference, no matter how it manifests itself within reasonable limits? We show that this is possible, and that, excluding some pathological cases, cache-oblivious algorithms that are asymptotically optimal in the ideal-cache model are asymptotically optimal in any reasonable setting that rewards locality of reference. This is surprising, as the cache-oblivious framework envisions a particular architectural model involving blocked memory transfer into a multi-level hierarchy of caches of varying sizes, and was not designed to directly model locality-of-reference-correlated performance.

    PDF ARXIV DOI
  2. Abstract: We prove an $\Omega(\log n \log \log n)$ lower bound for the span of implementing the $n$-input, $\log n$-depth FFT circuit (also known as the butterfly network) in the nonatomic binary fork-join model. In this model, memory-access synchronizations occur only through fork operations, which spawn two child threads, and join operations, which resume a parent thread when its child threads terminate. Our bound is asymptotically tight for the nonatomic binary fork-join model, which has been of interest of late due to its conceptual elegance and ability to capture asynchrony. Our bound implies a super-logarithmic lower bound in the nonatomic binary fork-join model for implementing the butterfly merging networks used, e.g., in Batcher's bitonic and odd-even mergesort networks. This lower bound also implies an asymptotic separation result for the atomic and nonatomic versions of the fork-join model, since, as we point out, FFT circuits can be implemented in the atomic binary fork-join model with span equal to their circuit depth.

    PDF DOI
  3. Abstract: When lexicographically sorting strings, it is not always necessary to inspect all symbols. For example, the lexicographical rank of europar amongst the strings eureka, eurasia, and excells only depends on its so-called relevant prefix euro. The distinguishing prefix size $D$ of a set of strings is the number of symbols that actually need to be inspected to establish the lexicographical ordering of all strings. Efficient string sorters should be $D$-aware, i.e., their complexity should depend on $D$ rather than on the total number $N$ of all symbols in all strings. While there are many $D$-aware sorters in the sequential setting, there appear to be no such results in the PRAM model. We propose a framework yielding a $D$-aware modification of any existing PRAM string sorter. The derived algorithms are work-optimal with respect to their original counterpart: if the original algorithm requires $O(w(N))$ work, the derived one requires $O(w(D))$ work. The execution time increases only by a small factor that is logarithmic in the length of the longest relevant prefix. Our framework works universally for deterministic and randomized algorithms in all variations of the PRAM model, so that future improvements in ($D$-unaware) parallel string sorting will directly result in improvements in $D$-aware parallel string sorting.

    PDF ARXIV DOI
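
    An illustrative aside (our own sketch, not code from the paper): the distinguishing prefix size $D$ can be computed directly from pairwise longest common prefixes, since the relevant prefix of a string extends one symbol past its longest common prefix with any other string.

        def distinguishing_prefix_size(strings):
            # Total number of symbols a comparison-based string sorter
            # must inspect to order `strings` lexicographically.
            def lcp(a, b):
                n = 0
                while n < min(len(a), len(b)) and a[n] == b[n]:
                    n += 1
                return n

            total = 0
            for s in strings:
                longest = max(lcp(s, t) for t in strings if t is not s)
                total += min(len(s), longest + 1)  # length of the relevant prefix of s
            return total

        # The relevant prefix of "europar" in this set is "euro" (4 symbols).
        print(distinguishing_prefix_size(["europar", "eureka", "eurasia", "excells"]))
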
  4. Abstract:

    Currently, the fastest comparison-based sorting implementation on GPUs is implemented using a parallel pairwise merge sort algorithm (Thrust library). To achieve fast runtimes, the number of threads $t$ to sort the input of $N$ elements is fine-tuned experimentally for each generation of Nvidia GPUs in such a way that the number of elements $E = N/t$ that each thread accesses in each merging round results in a small (empirically measured) number of shared memory contentions, known as bank conflicts, while balancing the number of global memory accesses and latency-hiding through thread oversubscription/occupancy.

    In this paper, we show that for every choice of $E < w$, such that $E$ and $w$ are co-prime, there exists an input permutation on which every warp of $w$ threads of the Thrust merge sort is effectively reduced to using at most $\lceil w/E \rceil$ threads, due to the sequentialization of shared memory accesses caused by bank conflicts. Note that this matches the trivial worst-case bound on the loss of parallelism due to memory contentions for any warp accessing $wE$ contiguous shared memory locations.

    Our proof is constructive, i.e., we are able to automatically construct such a permutation for every value of $E$. We also show in practice that the constructed inputs result in up to 50% slowdown on modern GPU hardware, compared to the performance on random inputs.

    PDF DOI
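
    For readers unfamiliar with the cost model: with $w$ banks, shared memory word $a$ resides in bank $a \bmod w$, and a warp's access is serialized by a factor equal to the largest number of its threads addressing the same bank. A minimal Python model of this serialization factor (our own illustration, not the paper's adversarial construction):

        W = 32  # threads per warp = number of shared memory banks on most NVIDIA GPUs

        def conflict_degree(addresses):
            # Serialization factor of one warp-wide shared memory access:
            # the maximum number of threads whose addresses map to one bank.
            banks = [a % W for a in addresses]
            return max(banks.count(b) for b in set(banks))

        print(conflict_degree(range(W)))            # 1: one thread per bank, conflict-free
        print(conflict_degree(range(0, W * W, W)))  # 32: every thread hits bank 0

    A degree-$E$ conflict on every access is precisely what reduces a warp of $w$ threads to roughly $w/E$ effective threads, which is the loss the constructed inputs force.
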
  5. Abstract: We initiate a study of algorithms with a focus on the computational complexity of individual elements, and introduce the fragile complexity of comparison-based algorithms as the maximal number of comparisons any individual element takes part in. We give a number of upper and lower bounds on the fragile complexity for fundamental problems, including Minimum, Selection, Sorting and Heap Construction. The results include both deterministic and randomized upper and lower bounds, and demonstrate a separation between the two settings for a number of problems. The depth of a comparator network is a straightforward upper bound on the worst-case fragile complexity of the corresponding fragile algorithm. We prove that fragile complexity is a different and strictly easier property than the depth of comparator networks, in the sense that for some problems a fragile complexity equal to the best network depth can be achieved with less total work, and that with randomization, even a lower fragile complexity is possible.

    PDF ARXIV DOI
  6. Abstract:

    We study the relationship between memory accesses, bank conflicts, thread multiplicity (also known as over-subscription) and instruction-level parallelism in comparison-based sorting algorithms for Graphics Processing Units (GPUs). We experimentally validate a proposed formula that relates these parameters with asymptotic analysis of the number of memory accesses by an algorithm. Using this formula we analyze and compare several GPU sorting algorithms, identifying key performance bottlenecks in each one of them. Based on this analysis we propose a GPU-efficient multiway mergesort algorithm, GPU-MMS, which minimizes or eliminates these bottlenecks and balances various limiting factors for specific hardware.

    We realize an implementation of GPU-MMS and compare it to sorting algorithm implementations in state-of-the-art GPU libraries on three GPU architectures. Despite these library implementations being highly optimized, we find that GPU-MMS outperforms them by an average of 21% for random integer inputs and 14% for random key-value pairs.

    PDF DOI Related Software
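
    The central primitive is multiway merging: merging $k$ sorted runs at a time reduces the number of passes over the data from $\log_2$ to $\log_k$ of the number of runs. A minimal sequential Python sketch of a $k$-way merge (the paper's contribution is performing this step GPU-efficiently; this is only the abstract idea):

        import heapq

        def multiway_merge(runs):
            # Merge k sorted runs in a single pass using a k-entry heap.
            heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
            heapq.heapify(heap)
            out = []
            while heap:
                val, i, j = heapq.heappop(heap)
                out.append(val)
                if j + 1 < len(runs[i]):
                    heapq.heappush(heap, (runs[i][j + 1], i, j + 1))
            return out

        print(multiway_merge([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))  # [1, 2, ..., 9]
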
  7. Abstract: We present parallel algorithms to efficiently permute a sorted array into the level-order binary search tree (BST), level-order B-tree (B-tree), and van Emde Boas (vEB) layouts in-place. We analytically determine the complexity of our algorithms and empirically measure their performance. Results indicate that on both CPU and GPU architectures B-tree layouts provide the best query performance. However, when considering the total time to permute the data and to perform a series of search queries, our vEB permutation provides the best performance on the CPU. We show that, given an input of $N = 500\mathrm{M}$ 64-bit integers, the benefits of query performance (compared to binary search) outweigh the cost of in-place permutation using our algorithms when performing at least $5\mathrm{M}$ queries (1% of $N$) and $27\mathrm{M}$ queries (6% of $N$), on our CPU and GPU platforms, respectively.

    PDF DOI
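
    For intuition, here is what the simplest of the three layouts looks like: the level-order BST layout stores the implicit search tree of the sorted array in breadth-first order. A small out-of-place Python sketch (the paper's algorithms perform such permutations in-place and in parallel; this only shows the target layout):

        def to_bst_layout(sorted_arr):
            # Permute a sorted array into the level-order BST layout:
            # out[0] is the root; out[2i+1] / out[2i+2] are the children of out[i].
            n = len(sorted_arr)
            out = [None] * n
            it = iter(sorted_arr)      # in-order traversal yields the sorted order
            def fill(i):
                if i < n:
                    fill(2 * i + 1)    # left subtree holds the smaller keys
                    out[i] = next(it)
                    fill(2 * i + 2)    # right subtree holds the larger keys
            fill(0)
            return out

        print(to_bst_layout([1, 2, 3, 4, 5, 6, 7]))  # [4, 2, 6, 1, 3, 5, 7]
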
  8. Abstract: Visibility graph reconstruction, which asks us to construct a polygon that has a given visibility graph, is a fundamental problem with unknown complexity (although visibility graph recognition is known to be in PSPACE). We show that two classes of uniform step length polygons can be reconstructed efficiently by finding and removing rectangles formed between consecutive convex boundary vertices, called tabs. In particular, we give an $O(n^2 m)$-time reconstruction algorithm for orthogonally convex polygons, where $n$ and $m$ are the number of vertices and edges in the visibility graph, respectively. We further show that reconstructing a monotone chain of staircases (a histogram) is fixed-parameter tractable, when parameterized on the number of tabs, and polynomially solvable in time $O(n^2 m)$ under reasonable alignment restrictions.

    PDF ARXIV DOI
  9. Abstract: Motivated by the asymmetric read and write costs of emerging non-volatile memory technologies, we study lower bounds for the problems of sorting, permuting and multiplying a sparse matrix by a dense vector in the asymmetric external memory model (AEM). Given an AEM with internal (symmetric) memory of size $M$, transfers between symmetric and asymmetric memory in blocks of size $B$, and the ratio $\omega$ between write and read costs, we show an $\Omega(\min\{N, \frac{\omega N}{B}\log_{\frac{\omega M}{B}} \frac{N}{B}\})$ lower bound for the cost of permuting $N$ input elements. This lower bound also applies to the problem of sorting $N$ elements. This proves that the existing sorting algorithms in the AEM model are optimal to within a constant factor for reasonable ranges of the parameters $N$, $M$, $B$, and $\omega$. We also show a lower bound of $\Omega(\min\{H, \frac{\omega H}{B} \log_{\frac{\omega M}{B}} \frac{N}{\max\{\delta, M\}}\})$ for the cost of multiplying an $N \times N$ matrix with at most $H = \delta N$ non-empty entries by a vector with $N$ elements.

    PDF DOI
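
    A quick sanity check (our own specialization, not stated in the abstract): setting the write/read cost ratio $\omega = 1$ makes the AEM symmetric, and the permuting bound above collapses to the classic external-memory permuting lower bound of Aggarwal and Vitter, $\Omega(\min\{N, \frac{N}{B}\log_{M/B}\frac{N}{B}\})$.
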
  10. Abstract:

    Let $T$ be a terrain, and let $P$ be a set of points (locations) on its surface. An important problem in Geographic Information Science (GIS) is computing the visibility index of a point $p$ in $P$, that is, the number of points in $P$ that are visible from $p$. The total visibility-index problem asks for computing the visibility index of every point in $P$. Most applications of this problem involve 2-dimensional terrains represented by a grid of $n \times n$ square cells, where each cell is associated with an elevation value, and $P$ consists of the center-points of these cells. Current approaches for computing the total visibility-index on such a terrain take at least quadratic time with respect to the number of terrain cells. While finding a subquadratic solution to this 2D total visibility-index problem is an open problem, surprisingly, no subquadratic solution has been proposed for the one-dimensional (1D) version of the problem; in the 1D problem, the terrain is an $x$-monotone polyline, and $P$ is the set of the polyline vertices.

    We present an $O(n \log^2 n)$ algorithm that solves the 1D total visibility-index problem in the RAM model. Our algorithm is based on a geometric dualization technique, which reduces the problem to a set of instances of the red-blue line segment intersection counting problem. We also present a parallel version of this algorithm, which requires $O(\log^2 n)$ time and $O(n \log^2 n)$ work in the CREW PRAM model. We implement a naive $O(n^2)$ approach and three variations of our algorithm: one employing an existing red-blue line segment intersection algorithm and two new approaches that perform the intersection counting by leveraging features specific to our problem. We present experimental results for both serial and parallel implementations on large synthetic and real-world datasets, using two distinct hardware platforms. Results show that all variants of our algorithm outperform the naive approach by several orders of magnitude on large datasets. Furthermore, we show that our new intersection counting implementations achieve more than 8 times speedup over the existing red-blue line segment intersection algorithm. Our parallel implementation is able to process a terrain of $2^{24}$ vertices in under 1 minute using 16 cores, achieving more than 7 times speedup over serial execution.

    PDF DOI Related Software
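
    The naive quadratic baseline referenced above is short enough to state in full: sweeping away from a vertex $p$ while maintaining the maximum slope seen so far determines the visibility of every other vertex. A Python sketch (our own, under the usual convention that a line of sight touching the terrain still counts as visible):

        def total_visibility_index_naive(pts):
            # O(n^2) visibility indices for an x-monotone polyline.
            # pts: list of (x, y) with strictly increasing x-coordinates.
            n = len(pts)
            vis = [0] * n
            for i in range(n):
                for step in (1, -1):                  # sweep right, then left
                    max_slope = float("-inf")
                    for j in range(i + step, n if step == 1 else -1, step):
                        # Slope over horizontal *distance*, so the same
                        # running-maximum test works in both sweep directions.
                        slope = (pts[j][1] - pts[i][1]) / abs(pts[j][0] - pts[i][0])
                        if slope >= max_slope:        # no vertex in between blocks the view
                            vis[i] += 1
                        max_slope = max(max_slope, slope)
            return vis

        print(total_visibility_index_naive([(0, 0), (1, 2), (2, 0), (3, 3)]))  # [1, 3, 2, 2]
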
  11. Abstract: Many-core Graphics Processing Units (GPUs) are being used for general-purpose computing. However, due to architectural features, for many problems it is challenging to design parallel algorithms that exploit the full compute power of GPUs. Among these features is the memory design. Although the issue of coalesced global memory access has been documented and studied extensively, another important architectural feature is the organization of shared memory into banks. The study of how bank conflicts impact algorithm performance has only recently begun to receive attention. In this work we study the predecessor search problem and the effects of bank conflicts on its execution time. Via complexity analysis we show that bank conflicts cause a significant loss in parallelism for a naive algorithm. We then propose two improved algorithms: one that eliminates bank conflicts altogether but uses a work-inefficient linear search, and one that is work-optimal but experiences a limited number of bank conflicts. We develop GPU implementations of these algorithms and present experimental results obtained on real-world hardware. These results validate our theoretical analysis of the naive algorithm and allow us to assess the performance of our algorithms in practice. Although both our improved algorithms outperform the naive algorithm, our main experimental finding is that our conflict-limited algorithm provides a larger performance gain.

    PDF DOI
  12. Abstract:

    In this paper, we look at the complexity of designing algorithms without any bank conflicts in the shared memory of Graphics Processing Units (GPUs). Given an input of size $n$, $w$ processors, and $w$ memory banks, we study three fundamental problems: sorting, permuting, and $w$-way partitioning (defined as sorting an input containing exactly $n/w$ copies of every integer in $[w]$).

    We solve sorting in optimal $O(\frac{n}{w} \log n)$ time. When $n \ge w^2$, we solve the partitioning problem optimally in $O(n/w)$ time. We also present a general solution for the partitioning problem which takes $O(\frac{n}{w} \log^3_{n/w} w)$ time. Finally, we solve the permutation problem using a randomized algorithm in $O(\frac{n}{w} \log\log\log_{n/w} n)$ time. Our results show evidence that when working with banked memory architectures, there is a separation between these problems: the permutation and partitioning problems are not as easy as simple parallel scanning.

    PDF ARXIV DOI
  13. Abstract:

    We study the problem of list ranking in the parallel external memory (PEM) model. We observe an interesting dual nature for the hardness of the problem due to limited information exchange among the processors about the structure of the list, on the one hand, and its close relationship to the problem of permuting data, which is known to be hard for the external memory models, on the other hand.

    By carefully defining the power of the computational model, we prove a permuting lower bound in the PEM model. Furthermore, we present a stronger $\Omega(\log^2 N)$ lower bound for a special variant of the problem and for a specific range of the model parameters, which takes us a step closer toward proving a non-trivial lower bound for the list ranking problem in the bulk-synchronous parallel (BSP) and MapReduce models. Finally, we also present an algorithm that is tight for a larger range of parameters of the model than in prior work.

    PDF ARXIV DOI
  14. Abstract:

    In this paper we study the offline (batched) range minima query (RMQ) problem in the external memory (EM) and cache-oblivious (CO) models. In the static RMQ problem, given an array $A$, a query $\mathrm{rmq}_A(i,j)$ returns the smallest element in the range $A[i,j]$.

    If $B$ is the size of a block and $m$ is the number of blocks that fit in the internal memory of the EM and CO models, we show that $Q$ range minima queries on an array of size $N$ can be answered in $O(\frac{N}{B} + \frac{Q}{B}\log_{m} \frac{Q}{B}) = O(\mathrm{scan}(N) + \mathrm{sort}(Q))$ I/Os in the CO model, and in slightly better $O(\mathrm{scan}(N) + \frac{Q}{B} \log_m \min\{\frac{Q}{B}, \frac{N}{B}\})$ I/Os in the EM model, using linear space in both models. Our cache-oblivious result is new, and our external memory result improves the previously known bound. We also show that the EM bound is tight by proving a matching lower bound. Our lower bound holds even if the queries are presorted in any predefined order.

    In the batched dynamic RMQ problem, the queries must be answered in the presence of updates (insertions/deletions) to the array. We show that in the EM model we can solve this problem in $O(\mathrm{sort}(N) + \mathrm{sort}(Q)\log_m \frac{N}{B})$ I/Os, again improving the best previously known bound.

    PDF DOI
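
    In internal memory, the offline static problem has a folklore linear-space solution that the batched I/O-efficient algorithms can be seen as externalizing: sort queries by right endpoint, sweep once while maintaining a monotone stack of suffix minima, and binary-search the stack per query. A Python sketch (our own, for intuition only):

        import bisect

        def batched_rmq(A, queries):
            # Answer all rmq_A(i, j) queries (0-based, inclusive, i <= j) offline.
            # During the sweep, the stack holds indices in increasing order whose
            # values are the strictly increasing suffix minima of A[0..j].
            order = sorted(range(len(queries)), key=lambda k: queries[k][1])
            ans = [None] * len(queries)
            idx, val, q = [], [], 0
            for j, x in enumerate(A):
                while val and val[-1] >= x:   # pop candidates dominated by x
                    idx.pop()
                    val.pop()
                idx.append(j)
                val.append(x)
                while q < len(order) and queries[order[q]][1] == j:
                    i = queries[order[q]][0]
                    # min(A[i..j]) = value of the deepest stack entry with index >= i
                    ans[order[q]] = val[bisect.bisect_left(idx, i)]
                    q += 1
            return ans

        print(batched_rmq([3, 1, 4, 1, 5, 9, 2, 6], [(0, 3), (2, 5), (4, 7)]))  # [1, 1, 2]
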
  15. Abstract: In this paper, we perform an empirical evaluation of the Parallel External Memory (PEM) model in the context of geometric problems. In particular, we implement the parallel distribution sweeping framework of Ajwani, Sitchinava and Zeh to solve the batched 1-dimensional stabbing-max problem. While modern processors consist of sophisticated memory systems (multiple levels of caches, set associativity, TLB, prefetching), we empirically show that algorithms designed in simple models that focus on minimizing the I/O transfers between shared memory and a single level of cache can lead to efficient software on current multicore architectures. Our implementation exhibits significantly fewer accesses to slow DRAM and, therefore, outperforms traditional approaches based on plane sweep and two-way divide and conquer.

    PDF ARXIV DOI
  16. Abstract: We study a simple parallel algorithm for computing matchings in a graph. A variant for unweighted graphs finds a maximal matching using linear expected work and $O(\log^2 n)$ expected running time in the CREW PRAM model. Similar results also apply to the External Memory, MapReduce and distributed memory models. In the maximum weight case, the algorithm guarantees a 1/2-approximation. Although the parallel execution time is linear for worst-case weights, an experimental evaluation indicates good scalability on distributed memory machines and on GPUs. Furthermore, the solution quality is very good in practice.

    PDF ARXIV DOI
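
    The flavor of algorithm studied here is simple enough to sketch: in each round, every vertex nominates its heaviest incident edge; edges nominated by both endpoints are locally maximal and can all be matched simultaneously, after which their endpoints are removed. A sequential Python rendering of this round structure (our own illustration of the "local max" idea, not the paper's exact algorithm):

        def local_max_matching(edges):
            # Round-based greedy matching: an edge is matched when it is the
            # heaviest edge at both of its endpoints.  edges: list of (u, v, w).
            matching, alive = [], list(edges)
            while alive:
                best = {}                      # vertex -> heaviest incident edge
                for e in alive:
                    u, v, w = e
                    for x in (u, v):
                        if x not in best or w > best[x][2]:
                            best[x] = e
                matched = [e for e in alive if best[e[0]] is e and best[e[1]] is e]
                matching += matched
                used = {x for u, v, _ in matched for x in (u, v)}
                alive = [e for e in alive if e[0] not in used and e[1] not in used]
            return matching

        print(local_max_matching([(1, 2, 5), (2, 3, 7), (3, 4, 6), (4, 1, 4)]))
        # [(2, 3, 7), (4, 1, 4)] -- weight 11, at least half the optimum
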
  17. Abstract: We study the one-dimensional range minimum query (RMQ) problem in the external memory model. We provide the first space-optimal solution to the batched static version of the problem. On an instance with $N$ elements and $Q$ queries, our solution takes $\Theta(\mathrm{sort}(N+Q)) = \Theta(\frac{N+Q}{B}\log_{M/B} \frac{N+Q}{B})$ I/O complexity and $O(N+Q)$ space, where $M$ is the size of the main memory and $B$ is the block size. This is a factor of $O(\log_{M/B} N)$ improvement in space complexity over the previous solutions. We also show that an instance of the batched dynamic RMQ problem with $N$ updates and $Q$ queries can be solved in $O(\frac{N+Q}{B} \log^2_{M/B} \frac{N+Q}{B})$ I/O complexity and $O(N+Q)$ space.

    PDF DOI
  18. Abstract:

    We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge’s sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processor cores compared to its sequential counterpart, thereby taking full advantage of multicore parallelism.

    The parallel buffer tree is a search tree data structure that supports the batched parallel processing of a sequence of $N$ insertions, deletions, membership queries, and range queries in the optimal $O(\mathrm{sort}_P(N) + K/PB)$ parallel I/O complexity, where $K$ is the size of the output reported in the process and $\mathrm{sort}_P(N)$ is the parallel I/O complexity of sorting $N$ elements using $P$ processors.

    PDF DOI
  19. Abstract: We study the MapReduce framework from an algorithmic standpoint, providing a generalization of the previous algorithmic models for MapReduce. We present optimal solutions for the fundamental problems of all-prefix-sums, sorting and multi-searching. Additionally, we design optimal simulations of the well-established PRAM and BSP models in MapReduce, immediately resulting in optimal solutions to the problems of fixed-dimensional linear programming and 2-D and 3-D convex hulls.

    PDF ARXIV DOI
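
    All-prefix-sums is the canonical warm-up for such coarse-grained models because it decomposes into a constant number of rounds: local scans, a tiny scan over the $p$ block totals, and a local offset pass. A sequential Python simulation of the three rounds (our own illustration, not the paper's MapReduce formulation):

        from itertools import accumulate

        def prefix_sums(x, p):
            # Inclusive all-prefix-sums via the classic two-level scheme,
            # with the work of the p "processors" simulated sequentially.
            n = len(x)
            cuts = [n * i // p for i in range(p + 1)]
            blocks = [x[cuts[i]:cuts[i + 1]] for i in range(p)]
            local = [list(accumulate(b)) for b in blocks]   # round 1: local scans
            totals = [b[-1] if b else 0 for b in local]
            offsets = [0] + list(accumulate(totals))[:-1]   # round 2: scan of p totals
            return [v + off                                 # round 3: add offsets
                    for b, off in zip(local, offsets) for v in b]

        print(prefix_sums([1, 2, 3, 4, 5, 6, 7, 8], p=3))  # [1, 3, 6, 10, 15, 21, 28, 36]
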
  20. Abstract: The parallel external memory (PEM) model has been used as a basis for the design and analysis of a wide range of algorithms for private-cache multi-core architectures. As a tool for developing geometric algorithms in this model, a parallel version of the I/O-efficient distribution sweeping framework was introduced recently, and a number of algorithms for problems on axis-aligned objects were obtained using this framework. The obtained algorithms were efficient but not optimal. In this paper, we improve the framework to obtain algorithms with the optimal I/O complexity of $O(\mathrm{sort}_P(N) + K/PB)$ for a number of problems on axis-aligned objects; $P$ denotes the number of cores/processors, $B$ denotes the number of elements that fit in a cache line, $N$ and $K$ denote the sizes of the input and output, respectively, and $\mathrm{sort}_P(N)$ denotes the I/O complexity of sorting $N$ items using $P$ processors in the PEM model. To obtain the above improvement, we present a new one-dimensional batched range counting algorithm on a sorted list of ranges and points that achieves an I/O complexity of $O((N + K)/PB)$, where $K$ is the sum of the counts of all the ranges. The key to achieving efficient load balancing among the processors in this algorithm is a new method to count the output without enumerating it, which might be of independent interest.

    PDF DOI
  21. Abstract: We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2-D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems (orthogonal line segment intersection reporting, batched range reporting, and related problems), more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution sweeping technique inspired by its sequential counterpart.

    PDF DOI
  22. Abstract: In this paper, we study parallel I/O efficient graph algorithms in the Parallel External Memory (PEM) model, one of the private-cache chip multiprocessor (CMP) models. We study the fundamental problem of list ranking, which leads to efficient solutions to problems on trees, such as computing lowest common ancestors, tree contraction and expression tree evaluation. We also study the problems of computing the connected and biconnected components of a graph, the minimum spanning tree of a connected graph, and the ear decomposition of a biconnected graph. All our solutions on a $P$-processor PEM model provide an optimal speedup of $\Theta(P)$ in parallel I/O complexity and parallel computation time, compared to the single-processor external memory counterparts.

    PDF DOI
  23. Abstract: In this paper, we study parallel algorithms for private-cache chip multiprocessors (CMPs), focusing on methods for foundational problems that are scalable with the number of cores. By focusing on private-cache CMPs, we show that we can design efficient algorithms that need no additional assumptions about the way cores are interconnected, for we assume that all inter-processor communication occurs through the memory hierarchy. We study several fundamental problems, including prefix sums, selection, and sorting, which often form the building blocks of other parallel algorithms. Indeed, we present two sorting algorithms, a distribution sort and a mergesort. Our algorithms are asymptotically optimal in terms of parallel cache accesses and space complexity under reasonable assumptions about the relationships between the number of processors, the size of memory, and the size of cache blocks. In addition, we study sorting lower bounds in a computational model, which we call the parallel external-memory (PEM) model, that formalizes the essential properties of our algorithms for private-cache CMPs.

    PDF DOI
  24. Abstract: We consider the problem of placing a small number of angle guards inside a simple polygon $P$ so as to provide efficient proofs that any given point is inside $P$. Each angle guard views an infinite wedge of the plane, and a point can prove membership in $P$ if it is inside the wedges for a set of guards whose common intersection contains no points outside the polygon. This model leads to a broad class of new art gallery type problems, which we call “sculpture garden” problems and for which we provide upper and lower bounds. In particular, we show there is a polygon $P$ such that a “natural” angle-guard vertex placement cannot fully distinguish between points on the inside and outside of $P$ (even if we place a guard at every vertex of $P$), which implies that Steiner-point guards are sometimes necessary. More generally, we show that, for any polygon $P$, there is a set of $n+2(h-1)$ angle guards that solve the sculpture garden problem for $P$, where $h$ is the number of holes in $P$ (so a simple polygon can be defined with $n-2$ guards). In addition, we show that, for any orthogonal polygon $P$, the sculpture garden problem can be solved using $\frac{n}{2}$ angle guards. We also give an example of a class of simple (non-general-position) polygons that have sculpture garden solutions using $O(\sqrt{n})$ guards, and we show this bound is optimal to within a constant factor. Finally, while optimizing the number of guards solving a sculpture garden problem for a particular $P$ is of unknown complexity, we show how to find in polynomial time a guard placement whose size is within a factor of 2 of the optimal number for any particular polygon.

    PDF DOI
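
    The primitive underlying this model is just a point-in-wedge test, which takes two cross products per guard; a point proves membership by passing the test for every guard. A small Python sketch (our own, restricted to convex wedges of less than 180 degrees):

        def cross(o, a, b):
            # 2D cross product of vectors o->a and o->b (positive = left turn).
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def in_wedge(p, apex, r1, r2):
            # Is p inside the convex wedge at `apex` bounded by the rays toward
            # r1 and r2, where r1 -> r2 is a counterclockwise turn of < 180 degrees?
            return cross(apex, r1, p) >= 0 and cross(apex, p, r2) >= 0

        def proves_membership(p, wedges):
            # A point certifies membership by lying in all the guards' wedges.
            return all(in_wedge(p, *w) for w in wedges)

        # A single guard at the origin whose wedge is the first quadrant:
        print(in_wedge((1, 1), (0, 0), (1, 0), (0, 1)))   # True
        print(in_wedge((-1, 1), (0, 0), (1, 0), (0, 1)))  # False
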
  25. Abstract: This paper extends the reconfigurable shared scan-in architecture (RSSA) to provide the additional ability to change values on the scan configuration signals (scan enable signals) during the scan operation on a per-shift basis. We show that the extra flexibility of reconfiguring the scan chains every shift cycle reduces the number of different configurations required by RSSA while keeping test coverage the same. In addition, a simpler analysis can be used to construct the scan chains. This is the first paper of its kind to treat the scan enable signal as a test data signal during the scan operation of a test pattern. Results are presented for several ISCAS benchmark circuits as well as industrial circuits.

    PDF DOI
  26. Abstract: In this paper, an efficient technique for test data volume reduction is presented, based on the shared scan-in (Illinois Scan) architecture and the scan chain reconfiguration (Dynamic Scan) architecture. The composite architecture is created with analysis that relies on the compatibility relation of scan chains. Topological analysis and compatibility analysis are used to maximize gains in test data volume and test application time. The goal of the proposed synthesis procedure is to test all detectable faults in broadcast test mode using a minimum number of scan-chain configurations. As a result, more aggressive sharing of scan inputs can be applied for test data volume and test application time reduction. The experimental results demonstrate the efficiency of the proposed architecture on real industrial circuits.

    PDF DOI


Journal Articles

  1. Abstract: We present parallel algorithms to efficiently permute a sorted array into the level-order binary search tree (BST), level-order B-tree (B-tree), and van Emde Boas (vEB) layouts in-place. We analytically determine the complexity of our algorithms and empirically measure their performance. When considering the total time to permute the data in-place and to perform a series of search queries, the vEB layout provides the best performance on the CPU. Given an input of $N$ = 537 million 64-bit integers, the benefits of query performance (compared to binary search) outweigh the cost of in-place permutation when performing as few as 0.37% of $N$ queries. On the GPU, results depend on the particular architecture, with the B-tree and vEB layouts performing the best. The number of queries necessary to reach the break-even point with binary search ranges from 1.3% to 8.9% of $N$ = 1,074 million 32-bit integers.

    PDF DOI
  2. Abstract: Visibility graph reconstruction, which asks us to construct a polygon that has a given visibility graph, is a fundamental problem with unknown complexity (although visibility graph recognition is known to be in PSPACE). As far as we are aware, the only class of orthogonal polygons previously known to have efficient reconstruction algorithms is the class of orthogonal convex fans (staircase polygons) with uniform step lengths. We show that two classes of uniform step length polygons can be reconstructed efficiently by finding and removing rectangles formed between consecutive convex boundary vertices, called tabs. In particular, we give an $O(n^2 m)$-time reconstruction algorithm for orthogonally convex polygons, where $n$ and $m$ are the number of vertices and edges in the visibility graph, respectively. We further show that reconstructing a monotone chain of staircases (a histogram) is fixed-parameter tractable, when parameterized on the number of tabs, and polynomially solvable in time $O(n^2 m)$ under alignment restrictions. As a consequence of our reconstruction techniques, we also get recognition algorithms for visibility graphs of these classes of polygons with the same running times.

    PDF DOI
  3. Abstract: A foreword by the editors of the special issue dedicated to the selected papers from the 26th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2014).

    PDF DOI
  4. Abstract: An overview of computational geometry results in the Parallel External Memory (PEM) model.

    PDF DOI
  5. Abstract: Two factors primarily drive the soaring cost of semiconductor test: the number of test patterns applied to each chip and the time it takes to run each pattern. Typical semiconductor testing for each chip involves a set of 1,000 to 5,000 test patterns. These tests are applied through scan chains that operate at about 25 MHz. Depending on the size of the scan chains on the chip, a set of test patterns can take a few seconds to execute per chip. It is easy to see that even a small decrease in either the number of patterns or the time to execute them can quickly add up to big savings across millions of fabricated chips. This potential savings forms the basis for dynamic scan, a new approach to the well-established scan test methodology. The authors' initial studies indicate that dynamic scan could easily reduce the time spent applying test patterns by 40 percent. A more theoretical analysis shows a potential savings of as much as 80 percent.

    PDF DOI


Book Chapters

  1. Abstract:

    PDF DOI


Refereed Workshops (without formally published proceedings)

  1. N. Sitchinava, D. Strash: "Reconstructing a unit-length orthogonally convex polygon from its visibility graph". European Workshop on Computational Geometry (EuroCG '16). 2016.
  2. P. Afshani, N. Sitchinava: "I/O-efficient range minima queries". 6th Workshop on Massive Data Algorithmics (MASSIVE '14). 2014.
  3. N. Sitchinava, V. Weichert: "Provably efficient GPU algorithms". 5th Workshop on Massive Data Algorithmics (MASSIVE '13). 2013.
  4. L. Arge, J. Fischer, P. Sanders, N. Sitchinava: "On (dynamic) range minimum queries in external memory". 5th Workshop on Massive Data Algorithmics (MASSIVE '13). 2013.
  5. D. Ajwani, N. Sitchinava, N. Zeh: "I/O-optimal distribution sweeping on private-cache chip multiprocessors". 3rd Workshop on Massive Data Algorithmics (MASSIVE '11). 2011.
  6. D. Ajwani, N. Sitchinava, N. Zeh: "Geometric algorithms for private-cache chip multiprocessors". 2nd Workshop on Massive Data Algorithmics (MASSIVE '10). 2010.
  7. L. Arge, M.T. Goodrich, N. Sitchinava: "Parallel external memory model". Workshop on Theory and Many-Cores (T&MC). 2009.
  8. * N. Sitchinava, S. Samaranayake, R. Kapur, F. Neuveux, E. Gizdarski, T.W. Williams: "Dynamically reconfigurable shared scan-in architecture". IEEE International Test Synthesis Workshop (ITSW '04). 2004.
  9. * N. Sitchinava, S. Samaranayake, R. Kapur, F. Neuveux, E. Gizdarski, T.W. Williams, D. Spielman: "A segment identification algorithm for a dynamic scan architecture". IEEE International Test Synthesis Workshop (ITSW '03). 2003.
  10. * N. Sitchinava, S. Samaranayake, R. Kapur, M. Amin, T.W. Williams: "DFT - ATE solution to lower the cost of test". IEEE Workshop on Test Resource Partitioning. 2001.