Computer Science Department Master's Papers
To obtain a copy of a Master's paper listed here, contact the author directly.

Lamont Samuels, 11 March 2013
Title: Strand Communication in Diderot

Diderot is a parallel domain-specific language designed for biomedical image-analysis and visualization algorithms; it provides a high-level mathematical programming model. Diderot allows domain experts to implement familiar image analysis and visualization algorithms directly in terms of tensors, tensor fields, and tensor operations, using the same mathematical notation that would be used in vector and tensor calculus. These operations are executed as parallel independent computations, which we model as autonomous lightweight threads called strands. The current design of Diderot limits strand creation to initialization time and provides no mechanism for communication between strands. For algorithms, such as particle systems, in which strands are used to explore the image space, it is useful to be able to create new strands dynamically and to share data between strands. We present a strand communication system with two mechanisms: a spatial mechanism that retrieves nearby strands based on their geometric position in space, and a global mechanism for global computations (i.e., reductions) over sets of strands. We also provide a new mechanism for dynamically allocating strands. We show through experiments that our strand communication system provides high performance in our sequential runtime.

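As an illustration of the spatial mechanism, the sketch below (Python, for concreteness) buckets strand positions into a uniform grid and answers nearby-strand queries. The names `Strand` and `neighbors_within` and the grid strategy are assumptions for illustration, not Diderot's actual runtime.

```python
# A minimal sketch of a spatial strand query, assuming strands carry a 2D
# position; all names here are hypothetical, not Diderot's API.
import math
from collections import defaultdict

class Strand:
    def __init__(self, pos):
        self.pos = pos  # (x, y) position in image space

def build_grid(strands, cell_size):
    """Bucket strands into uniform grid cells for fast spatial lookup."""
    grid = defaultdict(list)
    for s in strands:
        cell = (int(s.pos[0] // cell_size), int(s.pos[1] // cell_size))
        grid[cell].append(s)
    return grid

def neighbors_within(strand, grid, cell_size, radius):
    """Return strands within `radius` of `strand`, scanning nearby cells only."""
    cx, cy = int(strand.pos[0] // cell_size), int(strand.pos[1] // cell_size)
    reach = int(math.ceil(radius / cell_size))
    result = []
    for dx in range(-reach, reach + 1):
        for dy in range(-reach, reach + 1):
            for other in grid.get((cx + dx, cy + dy), []):
                if other is not strand and math.dist(strand.pos, other.pos) <= radius:
                    result.append(other)
    return result
```

The global mechanism then corresponds to an ordinary reduction, such as a sum or maximum, over the full set of live strands.
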
Zhihao Lou, 15 November 2012
Title: The 100-Processor Barrier in Parallel Simulated Annealing Algorithms

Simulated annealing is a powerful algorithm for global optimization problems. However, there appears to be a barrier that prevents parallel implementations from achieving speedups beyond 100. Given that departments now commonly have clusters with a few hundred nodes, while larger universities and institutions have access to even larger supercomputers, this is a disappointing situation. This paper reviews the techniques used to parallelize the simulated annealing algorithm, their performance and how that performance is measured, and the test problems used in the literature to date. It also discusses possible directions in which to move forward.

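For a flavor of the techniques such surveys cover, here is a minimal multi-chain sketch in Python with an invented objective and cooling schedule: the chains are embarrassingly parallel, and it is the periodic synchronization on a shared best state that limits scaling.

```python
# A sketch of one common parallelization strategy: run independent annealing
# chains and periodically adopt the best state found. Illustrative only.
import math, random

def anneal_chain(f, x, temp, steps, step_size=0.1):
    """Run one chain for `steps` Metropolis moves at temperature `temp`."""
    fx = f(x)
    for _ in range(steps):
        y = x + random.uniform(-step_size, step_size)
        fy = f(y)
        if fy < fx or random.random() < math.exp((fx - fy) / temp):
            x, fx = y, fy
    return x, fx

def parallel_anneal(f, n_chains=8, rounds=50, steps=100):
    xs = [random.uniform(-10, 10) for _ in range(n_chains)]
    temp = 10.0
    for _ in range(rounds):
        # Each chain is independent and could run on its own processor...
        results = [anneal_chain(f, x, temp, steps) for x in xs]
        # ...but this synchronization step serializes them, which is the
        # kind of bottleneck behind the speedup barrier discussed above.
        best_x, _ = min(results, key=lambda r: r[1])
        xs = [best_x] * n_chains
        temp *= 0.9  # cool
    return best_x

print(parallel_anneal(lambda x: (x - 3.0) ** 2 + math.sin(5 * x)))
```
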
Sven Auhagen, 13 April 2012
Title: Hybrid Chunking

Manticore is a compiler for the parallel programming language Parallel ML (PML), which is based on Standard ML. PML supports parallelism using an implicitly-threaded model in which the programmer annotates parts of the program that should be run in parallel. For example, in a recursive divide-and-conquer algorithm, the programmer could use a parallel tuple annotation to specify that recursive calls should be made in parallel. The Manticore compiler replaces the parallel tuples with futures that encapsulate the potential parallel work. At runtime, work stealing is used to balance the parallel workload. For many programs, this approach introduces too much fine-grained parallelism, so the overhead of managing the futures comes to dominate the execution time. The solution is to increase the granularity of the work, but not so much that there is insufficient parallelism to keep the processors busy. In this paper, we introduce an automatic mechanism, called hybrid chunking, for chunking fine-grain parallel computations into their sequential equivalents. Our approach uses a combination of compile-time analysis and runtime instrumentation and decision making to identify when a recursive parallel computation should be run as a recursive sequential computation. We present a cost model used in the static analysis to obtain cost estimates for expressions and size information for data structures, and we describe how this information is used at runtime to switch the execution of a program from parallel to sequential when we determine that it will run faster sequentially. We show that our hybrid chunking approach obtains good speedups on recursive divide-and-conquer programs.

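The runtime decision at the heart of chunking is easy to picture. The Python sketch below is a loose analogue, not Manticore's SML implementation: it falls back to the sequential equivalent whenever the work is below a cutoff, with the `CUTOFF` constant standing in for the paper's cost model.

```python
# A minimal sketch of the granularity idea: below an estimated-cost cutoff,
# run sequentially; above it, split into coarse parallel chunks. In CPython
# the threads only illustrate structure (the GIL limits real speedup); a
# production system would use processes or compiled parallel workers.
import os
from concurrent.futures import ThreadPoolExecutor

CUTOFF = 50_000  # stand-in for a compile-time/runtime cost estimate

def seq_sum(xs, lo, hi):
    """Sequential equivalent, used once the work is too fine-grained."""
    return sum(xs[lo:hi])

def hybrid_sum(xs):
    n = len(xs)
    if n <= CUTOFF:            # chunking decision: futures would cost more
        return seq_sum(xs, 0, n)
    workers = os.cpu_count() or 4
    step = -(-n // workers)    # ceil division: one coarse chunk per worker
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = [pool.submit(seq_sum, xs, i, min(i + step, n))
                 for i in range(0, n, step)]
        return sum(p.result() for p in parts)

print(hybrid_sum(list(range(1_000_000))))
```
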
Denis Pankratov, 07 March 2012
Title: Direct Sum Questions in Classical Communication Complexity

In 1988, Karchmer and Wigderson generalized Yao's two-party communication model from functions to relations and showed a remarkable connection between this model and the Boolean circuit model. A few years later, continuing this line of work, Karchmer, Raz, and Wigderson proposed a program to separate NC from P through direct-sum-type inequalities in communication complexity. This spurred the study of a fundamental question in communication complexity: given problems A and B, is it easier to solve A and B together than separately? We still seem far from separating NC from P; however, over the last 20 years our understanding of how different communication complexity measures behave with respect to direct sums has advanced considerably. We survey some of these results in this paper and make a new observation about a recent approach to the direct-sum question in the randomized setting.

Sobhan Naderi Parizi, 10 February 2012
Title: Image Classification with Reconfigurable Spatial Structures

We propose a new latent variable model for scene recognition. Our approach represents a scene as a collection of region models ("parts") arranged in a reconfigurable pattern. We partition an image into a pre-defined set of regions and use a latent variable to specify which region model is assigned to each image region. In our current implementation we use a bag of words representation to capture the appearance of an image region. The resulting method generalizes a spatial bag of words approach that relies on a fixed model for the bag of words in each image region. Our models can be trained using both generative and discriminative methods. In the generative setting we use the Expectation-Maximization (EM) algorithm to estimate model parameters from a collection of images with category labels. In the discriminative setting we use a latent structural SVM (LSSVM). We note that LSSVMs can be very sensitive to initialization and demonstrate that generative training with EM provides a good initialization for discriminative training with LSSVM.

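To make the latent assignment concrete, here is a minimal scoring sketch that assumes each image region independently selects whichever region model scores it best; the features and weights are random stand-ins, and the EM/LSSVM training described above is not shown.

```python
# A minimal sketch of scoring with reconfigurable parts; an illustrative
# simplification, not the paper's full model.
import numpy as np

def score_image(region_feats, part_weights):
    """region_feats: (R, D) bag-of-words features, one row per image region.
    part_weights: (P, D) one linear region model ("part") per row.
    Returns the total score and the latent part-to-region assignment."""
    scores = region_feats @ part_weights.T  # (R, P): each part on each region
    assignment = scores.argmax(axis=1)      # latent variable: best part per region
    return scores.max(axis=1).sum(), assignment

rng = np.random.default_rng(0)
total, z = score_image(rng.random((4, 10)), rng.random((3, 10)))
print(total, z)
```
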
Pooya Hatami, 19 January 2012
Title: Testing Properties of Boolean Functions: Lower Bounds on Testing Fourier Degree

We consider the problem of deciding whether a given object has a given property or is far from any object with the property, referred to as property testing. We focus on the case where the objects are Boolean functions, and we survey some previously known results about testing properties such as the number of relevant variables and the Fourier degree of a Boolean function. We present the recently developed technique for proving lower bounds in property testing that is based on connections between property testing and communication complexity [13]. For the problem of testing whether a Boolean function has Fourier degree ≤ k or is ε-far from any Boolean function with Fourier degree ≤ k, we improve the known lower bound of Ω(k) [13, 18] to Ω(k/√ε), using reductions from communication complexity.

Maia Fraser, 10 November 2011
Title: Structural Observations on Neural Networks for Hierarchically Derived Reproducing Kernels

This thesis addresses structural aspects of neural networks (NNs) used for producing hierarchically derived reproducing kernels in the CBCL model. It also shows that convolutional neural networks (CNNs) may be viewed as a special case of a generalized model we call the extended CBCL model. The main results of the thesis use Hilbert space geometry to show that, for NNs in the extended CBCL model, the use of multiple layers and/or non-linear neural responses does not broaden the class of kernels computed; rather, at issue seems to be the amount of effort involved in specializing the NN to its intended function. More precisely, the thesis shows how various design parameters of the NN (transformation sets, neural responses, templates, depth) may be traded off against each other without changing the map computed by the NN.

Zhao Zhang, 02 June 2011
Title: Bridging the Gaps between Many-Task Computing and Supercomputers

Many-Task Computing, an emerging programming paradigm on supercomputers, embraces applications in domains such as biology, economics, and statistics, as well as data-intensive computations and uncertainty quantification. Its high inter-task parallelism and intense data processing pose new challenges for the existing hardware-software stack of supercomputers, including resource provisioning, job scheduling, load balancing, data management, and resiliency. In this paper, we identify the Many-Task Computing middleware gaps between applications and the supercomputer hardware-software stack by examining their characteristics. Based on this analysis, we propose solutions for each of the gaps, and by evaluating them we show the pros and cons of each solution and the conditions under which it applies. We also conduct a preliminary study of real applications and show how the proposed solutions enhance the efficiency and scalability of well-known domain science applications.

Tim Armstrong, 24 May 2011
Title: Integrating Task Parallelism into the Python Programming Language

One of the major current challenges in computer science is providing programming models and abstractions that allow efficient and correct parallel programs to be expressed. The challenge occurs in a number of domains, with different scales of parallelism required. In the consumer space, eight or ten cores will soon be common on desktop computers, and the performance of software that cannot take full advantage of many cores will lag. In the scientific space, investigations increasingly require computationally intensive simulation and data crunching on computer clusters. Coordination languages are one way to make the problem of writing parallel programs more manageable: a coordination language provides a specialized way to specify the communication and synchronization between the components of a parallel or distributed application. In many cases, the parallelism in a program is naturally expressed as task parallelism, where independent tasks that produce and consume data can run in parallel. This paper proposes and demonstrates the feasibility of embedding a task-parallel coordination language, PyDFlow, into the general-purpose scripting language Python. PyDFlow currently supports loosely coupled workflows, where tasks communicate through input and output files, but the language is designed to be extensible beyond this. PyDFlow represents parallel computations as graphs of tasks linked by data dependencies and provides a programming model akin to functional languages, where computations are specified by composing functions into expressions. This gives natural syntax and semantics for expressing a parallel or distributed computation that integrates well with Python, and it allows execution techniques from the functional-language literature to be applied.

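The flavor of such an embedding can be sketched with plain futures. The code below is not PyDFlow's API; it is a hypothetical stand-in showing how composing task-wrapped functions builds a graph of data dependencies that executes in parallel. (This toy relies on tasks being submitted before their consumers; a real system schedules by dependency.)

```python
# A minimal sketch of a task-parallel embedding in Python; names and the
# decorator protocol are invented for illustration.
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=4)

def task(fn):
    """Lift a function into a task: calling it returns a future; futures
    passed as arguments are waited on first (a data dependency)."""
    def submit(*args):
        def run():
            vals = [a.result() if hasattr(a, "result") else a for a in args]
            return fn(*vals)
        return pool.submit(run)
    return submit

@task
def load(name): return list(range(10))   # stand-in for reading an input file

@task
def square(xs): return [x * x for x in xs]

@task
def total(xs, ys): return sum(xs) + sum(ys)

# Functional composition builds the dependency graph; execution is parallel.
a, b = load("a.dat"), load("b.dat")
print(total(square(a), square(b)).result())
```
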
Matthew Rocklin, 18 May 2011
Title: Uncertainty Quantification and Sensitivity Analysis

Dynamical systems describe the time evolution of a point in a space of potential states. If knowledge of the initial point is uncertain, then the dynamical system evolves a probability distribution forward rather than a single point. Tracking this distribution is challenging, and approximate methods are often necessary. We study uncertainty quantification for short-term forecasts in dynamical systems, comparing sampling (Monte Carlo) against adjoint methods on low- and high-dimensional systems. For sampling, we particularly consider the formation of sensible initial distributions, so-called "errors of the day", in a low-rank attractor, given model dynamics and expectation values distributed in time. Adjoint methods are shown to be a cheaper alternative when close to the true solution and can also be used to pinpoint optimal observations that reduce targeted uncertainties in the final distribution. As a motivating example, we consider uncertainty in the wind speed of short-term forecasts to aid advance management of resources on the power grid.

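The sampling half of this comparison is straightforward to sketch: perturb the initial state, push the whole ensemble through the dynamics, and read the forecast uncertainty off the ensemble spread. The Lorenz-63 system and Euler integrator below are stand-ins for the systems studied in the paper.

```python
# A minimal sketch of Monte Carlo uncertainty propagation through a toy
# nonlinear dynamical system; parameters are illustrative.
import numpy as np

def step(x, dt=0.01):
    """One Euler step of the Lorenz-63 system."""
    dx = 10.0 * (x[1] - x[0])
    dy = x[0] * (28.0 - x[2]) - x[1]
    dz = x[0] * x[1] - (8.0 / 3.0) * x[2]
    return x + dt * np.array([dx, dy, dz])

rng = np.random.default_rng(0)
# 500 perturbed copies of an uncertain initial state
ensemble = np.array([1.0, 1.0, 1.0]) + 0.01 * rng.standard_normal((500, 3))

for _ in range(500):  # forecast 5 time units
    ensemble = np.array([step(x) for x in ensemble])

print("forecast mean:", ensemble.mean(axis=0))
print("forecast std: ", ensemble.std(axis=0))  # spread = forecast uncertainty
```
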
Pratik Worah, 05 May 2011
Title: Some Problems Related to Proof Complexity and Optimization

Lovász and Schrijver introduced several lift-and-project methods for $0$-$1$ integer programs, now collectively known as the Lovász-Schrijver ($LS$) hierarchies. Several lower bounds have since been proven on the rank/depth of various linear programming relaxations in the $LS$ hierarchy. In this thesis we study lower bounds in the more general $LS_*$ hierarchy, first investigated by Grigoriev et al. In particular, we show that the $PHP$ inequalities have a $\log_2 n$ depth refutation in $LS_*$, and we prove that this is tight. We also extend the strong rank lower bounds for Max-Cut, Sparsest Cut, and other problems, studied by Charikar et al. for the Sherali-Adams hierarchy, to weaker rank lower bounds in the $LS_*$ hierarchy.

Allan Espinosa, 22 February 2011
Title: Cybershake on Opportunistic Cyberinfrastructures

Continued improvements in the provisioning capabilities of cyberinfrastructures increase the computational power available to science applications such as Cybershake. Using workflow tools such as Swift and Pegasus, we observed how jobs are scheduled to exploit an expanded set of resources available in TeraGrid and the Open Science Grid (OSG). Our tests on OSG obtained 2,000 CPUs in a day on an opportunistic basis, well within the time-to-solution requirements of the Cybershake application. However, the data requirements of the application posed potential bottlenecks. In this paper, we characterize how the time-to-solution budget is spent by examining the running time and file-transfer time of the post-processing computation.

Negarsadat Mirsattari, 15 February 2011
Title: Properties of Degree Sequences of k-Uniform Hypergraphs

A "k-uniform hypergraph" H is a pair H = (V, E), where V is a set of elements, each called a "vertex", and E = {S_1, S_2, ..., S_m} is a collection of distinct k-subsets of V, each called a "hyperedge". The "degree" of a vertex v, denoted deg(v), is the number of hyperedges incident with v. The "degree sequence" of a hypergraph H with vertices v_1, v_2, ..., v_n is d(H) = (deg(v_1), deg(v_2), ..., deg(v_n)). The problem of characterizing the degree sequences of k-uniform hypergraphs, i.e., k-hypergraphic sequences, is a long-standing open question in hypergraph theory. We investigate properties of degree sequences of k-hypergraphs as well as properties of their convex hull polytopes. We introduce a set of necessary conditions for a non-negative integral sequence to be k-hypergraphic, dividing vertices and hyperedges into distinct classes. For the convex hull polytopes of k-hypergraphic sequences, we first generalize some known properties of the polytopes of (simple) graphical sequences to the polytope of k-hypergraphic sequences, using our results on the properties of k-hypergraphic sequences. We then reduce an open problem concerning "holes" in the polytope of degree sequences to a well-known problem on lattice points.

Quan Pham, 07 October 2010
Title: Using a Rule Engine for Distributed Systems Management: An Exploration Using Data Replication

Dynamic changes in distributed systems are common, due to their many components and the fact that different components are frequently subject to different policies. These changes can make it difficult to construct applications that ensure the functionality or performance properties required by users; to run efficiently and achieve high performance, applications must adapt to such changes. However, the logic required to perform this adaptation can be complex and hard to implement and debug, especially when embedded deeply within application code. The goal of this thesis is to examine the possibility of reducing the complexity of distributed applications by using higher-level, declarative approaches to specifying adaptation logic. We demonstrate a distributed application by implementing a replication system using the Drools rule engine. From the system evaluation, we conclude that the Drools rule engine interfaces can be used to implement the logic of a data replication system easily, while performance remains sufficient for moderately sized systems.

Ankan Saha, 09 June 2010
Title: Topics on Structured Prediction: Problems and Approaches

We consider the task of structured data prediction. Over the last few years, there has been an abundance of data with inherent structure, exhibiting strong correlations and complex dependencies between different parts of each input. Numerous applications across disciplines, including part-of-speech tagging, optical character recognition, and pitch-accent prediction, involve structure in the data that learning algorithms must capture to perform better than standard multivariate classification/regression. In this paper, we survey existing structured prediction approaches for both training and inference. We show how the existing training algorithms (maximum-margin methods and maximum log-likelihood methods) are extensions of empirical risk minimization schemes to the structured prediction domain. We also review the standard graphical model formalism, which is used to define the structure inherent in most complex data, and the corresponding assumptions that lead to efficient training and prediction algorithms. Most existing structured prediction methods depend heavily on joint kernels, which do not easily allow them to learn from unlabeled data. Finally, we provide a new scheme based on vector-valued functions, which provides a rich framework for training and inference and can be seamlessly extended to semi-supervised structured learning. We formulate a couple of algorithms under the proposed setting and characterize the corresponding classifying functions.

Atilla Soner Balkir, 16 February 2010
Title: Statistical Sentence Chunking with MapReduce

Extracting concepts and relations within statements across a large corpus requires multiple related processes: chunking a sentence into its constituent terms (phrases), semantically classifying them, and creating new statements by assembling those phrases using knowledge extraction. We focus on the first step of this series by designing and evaluating two novel algorithms for chunking sentences using probabilistic measures. One important aspect of this problem is the need for parallel computation when scaling up to millions of sentences and billions of words. A key technical component of the implementation is the use of MapReduce for processing large data sets and Bigtable for managing distributed data while providing random, real-time read/write access.

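The map/reduce decomposition used for corpus-scale counting can be sketched in a few lines; the bigram "chunks" and the in-process reducer below are simplifications of the paper's probabilistic measures, and Bigtable storage is omitted entirely.

```python
# A minimal sketch of the map/reduce pattern for corpus statistics:
# mappers emit per-sentence candidate-chunk counts, reducers sum them.
from collections import Counter
from itertools import chain

def mapper(sentence):
    """Emit (bigram, 1) pairs: candidate two-word chunks in one sentence."""
    words = sentence.lower().split()
    return [((a, b), 1) for a, b in zip(words, words[1:])]

def reducer(pairs):
    """Sum counts per key, as the reduce phase would across all mappers."""
    counts = Counter()
    for key, n in pairs:
        counts[key] += n
    return counts

corpus = ["the quick brown fox", "the quick dog", "a quick brown fox"]
counts = reducer(chain.from_iterable(mapper(s) for s in corpus))
print(counts.most_common(3))  # frequent bigrams are chunk candidates
```
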
Lars Bergstrom, 20 October 2009
Title: Arity Raising and Control-Flow Analysis in Manticore

Manticore is a programming language designed for the execution of general-purpose parallel programs. Manticore is based on the language Standard ML, includes a wide variety of implicit and explicit features for parallelism, and provides concurrency abstractions based upon Concurrent ML. All of these features rely on good sequential performance, which this work focuses on improving. We present a new control-flow analysis technique, reference counted control-flow analysis (RCCFA). We compare the behavior and performance of RCCFA with several popular control-flow algorithms and provide a new result: in implementation, a collected CFA can perform more slowly than a non-collected CFA. We also provide a novel approach to arity raising that incorporates both unboxing and datatype flattening in a single optimization. These optimizations are currently performed in separate stages by other modern functional-language compilers, using either a type-directed or reduced-quality control-flow analysis. We show that our arity raising algorithm is effective at reducing code size, decreasing the dynamic number of bytes allocated, and speeding up execution times for different types of programs.

Morgan Sonderegger, 28 September 2009
Title: Dynamical Systems Models of Language Variation and Change: An Application to an English Stress Shift

Both variation and change are widespread in natural languages. In a given linguistic population, there are at any time many instances of variation (for example, the use of either -in or -ing in words such as 'singing'). Most such variation does not lead to change, but variation between two forms is a necessary condition for change from one to the other. Under what conditions does variation lead to change? We combine two existing approaches to this question: building historical datasets and making observations from them, and building mathematical models of linguistic populations. We describe the diachronic dynamics of an English stress shift, based on a historical dataset (1600-2000) of 150 words as listed in 75 dictionaries. This dataset shows several common aspects of variation and change: long-term stability followed by rapid change, multiple stable states, long-term stable variation, and word-frequency effects. We translate each of these into dynamical systems terms, as statements about fixed points and bifurcations. We then describe a range of dynamical systems models of populations of language learners, based on several theories from linguistics and cognitive science about the causes of change. We find the fixed points and bifurcations of these models to determine their dynamics as system parameters are varied, and we examine which model properties lead to dynamics consistent with our dataset and with observations about variation and change generally. One generalization that emerges is that successful models incorporate some form of bias in the data learners receive, as well as in the algorithm learners apply to the data.

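A toy example of the modeling style: iterate a one-parameter update rule for the frequency p of one stress variant and watch the stable fixed point flip as a learner bias b crosses 1. The rule is invented for illustration, not one of the paper's models.

```python
# A minimal fixed-point sketch of a population-level learning dynamic.
def update(p, b):
    """Learners sample the population with bias b toward variant 1."""
    return b * p / (b * p + (1 - p))

for b in (0.8, 1.0, 1.25):
    p = 0.5
    for _ in range(200):  # iterate toward a fixed point
        p = update(p, b)
    print(f"bias {b}: p* = {p:.3f}")
# b < 1 drives the variant to extinction (p* = 0), b > 1 to fixation
# (p* = 1), and b = 1 leaves variation stable: a bifurcation at b = 1.
```
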
Jing Tie, 03 August 2009
Title: Clustering and Recognizing Job Failures in Large-Scale Distributed Systems

Large-scale distributed systems are widespread, supporting millions of concurrent users across various application areas. This type of infrastructure provides common, friendly, and flexible interfaces, but it is also subject to many dynamic failures, and the size and complexity of these systems make diagnosing failures challenging. This paper focuses on learning and recognizing job failures automatically. First, we study the characteristics of job failures in a large-scale distributed system, PanDA (the distributed production and distributed analysis system for ATLAS experiments at the LHC). Based on two years of records, we make some interesting findings. One is that job failures are both temporally and spatially correlated, and strongly temporally correlated failures are likely to have the same root cause; bag-of-tasks applications and their job-arrival pattern (similar jobs arrive in groups) are major contributors to this phenomenon. Based on these findings, we develop a mechanism for discovering the patterns of different failures and recognizing similar ones in the future. Our mechanism learns correlation models between job metrics and job statuses using Bayesian network classifiers, extracts a signature for each job based on the best model, and clusters similar signatures together, which helps in labeling the signatures with root causes. When a new failure arrives, it can then be mapped to the most likely root cause by matching against the labeled signatures. The mechanism continues learning new models over a sliding window and incorporates good models into the ensemble. Our experimental results show that the learned models have very high accuracy, a high hit rate, and a low false-alarm rate in predicting job statuses in the near future. By analyzing the centroid signatures in the generated failure clusters, we show that those clusters can indeed differentiate and indicate meaningful root causes of failures.

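The signature/cluster/label pipeline can be pictured with a deliberately tiny stand-in, using plain k-means over invented job metrics in place of the paper's Bayesian network classifiers:

```python
# A minimal sketch: vectorize failed jobs, cluster similar failures, and
# label a new failure by its nearest cluster. Metrics are invented.
import numpy as np

def cluster(signatures, k=2, iters=20, seed=0):
    """Plain k-means over job signatures."""
    rng = np.random.default_rng(seed)
    centers = signatures[rng.choice(len(signatures), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((signatures[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([signatures[labels == j].mean(axis=0)
                            if (labels == j).any() else centers[j]
                            for j in range(k)])
    return centers, labels

# signature = (runtime, input size, retry count) of a failed job, say
sigs = np.array([[1, 9, 0], [2, 8, 1], [90, 1, 5], [95, 2, 6]], dtype=float)
centers, labels = cluster(sigs)
new_failure = np.array([88.0, 1.0, 5.0])
root_cause = np.argmin(((centers - new_failure) ** 2).sum(-1))
print(labels, "-> new failure matches cluster", root_cause)
```
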
Casey Klein, 17 July 2009
Title: Experience with Randomized Testing in Programming Languages Metatheory

We explore the use of QuickCheck-style randomized testing in programming languages metatheory, a methodology proposed to reduce development time by revealing shallow errors early, before a formal proof attempt. This exploration begins with the development of a randomized testing framework for PLT Redex, a domain-specific language for specifying and debugging operational semantics. In keeping with the spirit of Redex, the framework is as lightweight as possible: the user encodes a conjecture as a predicate over the terms of the language, and guided by the structure of the language's grammar, reduction relation, and metafunctions, Redex attempts to falsify the conjecture automatically. In addition to the details of this framework, we present a tutorial demonstrating its use and two case studies applying it to large language specifications. The first study, a postmortem, applies randomized testing to the formal semantics published with the latest revision of the Scheme language standard. Despite a community review period and a comprehensive, manually constructed test suite, randomized testing in Redex revealed four bugs in the semantics. The second study presents our experience applying the tool concurrently with the development of a formal model of the MzScheme virtual machine and bytecode verifier. In addition to many errors in our formalization, randomized testing revealed six bugs in the core bytecode verification algorithm in production use. The results of these studies suggest that randomized testing is a cheap and effective technique for finding bugs in large programming language metatheories.

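The methodology transfers to any setting with a term generator. Here is a minimal Python analogue (not Redex, whose generators are derived from a grammar specification): generate random terms, check a conjecture, and report the first counterexample.

```python
# A minimal sketch of randomized conjecture falsification over a toy
# term language: int | ('add', t, t) | ('neg', t).
import random

def gen_term(depth=3):
    """Generate a random arithmetic term."""
    if depth == 0 or random.random() < 0.3:
        return random.randint(-5, 5)
    if random.random() < 0.5:
        return ('add', gen_term(depth - 1), gen_term(depth - 1))
    return ('neg', gen_term(depth - 1))

def eval_term(t):
    if isinstance(t, int):
        return t
    if t[0] == 'add':
        return eval_term(t[1]) + eval_term(t[2])
    return -eval_term(t[1])

def simplify(t):
    """A deliberately buggy simplifier: it drops negations entirely."""
    if isinstance(t, int):
        return t
    if t[0] == 'add':
        return ('add', simplify(t[1]), simplify(t[2]))
    return simplify(t[1])  # bug: ('neg', x) is not x

# Conjecture: simplification preserves meaning. Random testing falsifies it.
for _ in range(1000):
    t = gen_term()
    if eval_term(simplify(t)) != eval_term(t):
        print("counterexample:", t)
        break
```
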
Ross Girshick, 29 May 2009
Title: Object Detection with Heuristic Coarse-to-Fine Search

We consider the task of localizing and labeling instances of a generic object class within real-world images. Our focus is on a generalized class of pictorial structure models defined in terms of visual grammars. In particular, we address the challenging problem of performing detection efficiently even as model complexity grows within this class. Our proposed solution is a blend of heuristic best-first search and a coarse-to-fine detection process. This paper demonstrates that our algorithm can be successfully applied to two special cases of visual grammars: multiscale star models and mixtures of multiscale star models. We show that for problems where the desired output is the set of local optima of a thresholded function, best-first search gives additional pruning power to coarse-to-fine processes. Unfortunately, admissible heuristics that also provide good best-first search behavior can be difficult or impossible to find in practice. To resolve this deficiency, we provide theoretical results demonstrating that inadmissible heuristics can be used to increase detection speed while only slightly increasing the likelihood of making mistakes. The theoretical results are bolstered by strong experimental evidence obtained by applying inadmissible heuristic coarse-to-fine detection to our object recognition system during both training and testing. We increase testing speed by a factor of 2-3 for some classes while maintaining comparable average precision scores on the challenging PASCAL 2007 dataset. Ultimately, we expect to see even more significant speed gains when we explore more complex grammar models in future work.

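A stripped-down picture of the search strategy, with random stand-in scores: a cheap coarse score orders a priority queue, only promising candidates receive the expensive fine evaluation, and a bound on the coarse-to-fine gap decides when to stop. Shrinking that bound below the true gap would model an inadmissible heuristic, trading occasional missed detections for speed.

```python
# A minimal sketch of heuristic best-first coarse-to-fine detection;
# scores are random stand-ins for real model evaluations.
import heapq
import random

random.seed(0)
locations = [(x, y) for x in range(50) for y in range(50)]
coarse = {loc: random.random() for loc in locations}              # cheap score
fine = {loc: coarse[loc] + random.uniform(-0.1, 0.1) for loc in locations}

def detect(threshold=0.95, gap=0.1):
    """Pop best coarse score first; stop once nothing left can pass."""
    heap = [(-coarse[loc], loc) for loc in locations]
    heapq.heapify(heap)
    hits, evaluated = [], 0
    while heap:
        neg, loc = heapq.heappop(heap)
        if -neg + gap < threshold:   # `gap` bounds |fine - coarse|; an
            break                    # underestimate makes this inadmissible
        evaluated += 1
        if fine[loc] >= threshold:   # expensive fine-scale evaluation
            hits.append(loc)
    print(f"{len(hits)} detections, {evaluated}/{len(locations)} fine evals")

detect()
```
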
Sravana Reddy, 20 May 2009
Title: Part of Speech Induction using Non-negative Matrix Factorization

Unsupervised part-of-speech induction involves the discovery of syntactic categories in a text, given no additional information other than the text itself. One requirement of an induction system is the ability to handle multiple categories for each word, in order to deal with word sense ambiguity. We construct an algorithm for unsupervised part-of-speech induction, treating the problem as one of soft clustering. The key technical component of the algorithm is the application of the recently developed technique of non-negative matrix factorization to the task of category discovery, using word contexts and morphology as syntactic cues.

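The core computation is standard enough to sketch end to end: factor a nonnegative word-by-context count matrix V into WH and read each word's soft category memberships off the rows of W. The toy data, rank, and iteration count below are stand-ins for real context and morphology features.

```python
# A minimal NMF soft-clustering sketch using Lee & Seung's multiplicative
# updates for the Frobenius objective; data is a random stand-in.
import numpy as np

rng = np.random.default_rng(0)
V = rng.poisson(2.0, size=(12, 8)).astype(float) + 1e-9  # word x context counts
k = 3                                                    # induced categories

W = rng.random((12, k)) + 0.1
H = rng.random((k, 8)) + 0.1
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

soft = W / W.sum(axis=1, keepdims=True)  # each word's distribution over categories
print(np.round(soft, 2))                 # soft membership handles ambiguous words
```
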
Xueyuan Zhou, 05 November 2008
Title: Exploiting Geometric Structure of High Dimensional Data for Learning: An Empirical Study

In machine learning, high-dimensional data nominally has many degrees of freedom. However, recent experiments show that real-world data in high dimensions is usually governed by a surprisingly small number of dimensions. We believe that in high dimensions, geometric information, for example the "shape" of the data distribution, can help learning algorithms perform better, so a geometric transform of high-dimensional data targeted at learning is attractive. In this paper, we give an empirical evaluation of how the geometric information in high-dimensional data can help learning. We consider two geometric transforms, Laplacian Eigenmaps and Diffusion Maps, and discuss distances in the transformed spaces. We compare classification results on data in the original spaces with results on the transformed representations in various application areas, including image, text, acoustic signals, microarray data, and artificial data sets. The results show that learning algorithms can take advantage of geometric information for most real-world high-dimensional data; when labeled examples are extremely few, the geometric transforms yield large improvements. We also find cases where these transforms fail on artificial data sets, and we discuss general conditions under which the transforms can produce better classification.

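One of the two transforms can be sketched end to end. The code below builds a k-nearest-neighbor graph and embeds the data using the low eigenvectors of the graph Laplacian; the neighborhood size and target dimension are illustrative choices, not the paper's settings.

```python
# A minimal Laplacian-eigenmaps-style transform over random stand-in data.
import numpy as np

def laplacian_eigenmaps(X, n_neighbors=5, dim=2):
    n = len(X)
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):                             # symmetrized k-NN adjacency
        for j in np.argsort(d2[i])[1:n_neighbors + 1]:
            W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(1)) - W                      # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]                      # skip the constant eigenvector

X = np.random.default_rng(0).random((100, 20))     # "high-dimensional" data
print(laplacian_eigenmaps(X).shape)                # (100, 2) low-dim coordinates
```
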
Joshua Grochow, 04 November 2008
Title: The Complexity of Equivalence Relations

To determine whether two given lists of numbers are the same set, we would sort both lists and see if we get the same result; the sorted list is a canonical form for the equivalence relation of set equality. Other canonical forms for equivalences arise in graph isomorphism and its variants, and in the equality of permutation groups given by generators. To determine whether two given graphs are cospectral, however, we compute their spectra and see if they are the same set; the graph spectrum is a complete invariant for the equivalence relation of cospectrality. This is weaker than a canonical form, and it is not known whether a canonical form for cospectrality exists. Note that it is a priori possible for an equivalence relation to be decidable in polynomial time without either a complete invariant or a canonical form. Blass and Gurevich ("Equivalence relations, invariants, and normal forms, I and II", 1984) asked whether these conditions on equivalence relations (having an FP canonical form, having an FP complete invariant, and simply being in P) are in fact different; they showed that this question requires non-relativizing techniques to resolve. Here we extend their results using generic oracles and give new connections to probabilistic and quantum computation. We denote the class of equivalence problems in P by PEq, the class of problems with complete FP invariants by Ker, and the class with FP canonical forms by CF; CF is contained in Ker, which is contained in PEq, and we ask whether these inclusions are proper. If an equivalence relation R has the property that xRy implies |y| < poly(|x|), we say that R is polynomially bounded; we denote the corresponding classes CF_p, Ker_p, and PEq_p. Our main results are:

1. If CF = PEq, then NP = UP = RP and thus PH = BPP.
2. If CF = Ker, then NP = UP, PH = ZPP^{NP}, integers can be factored in probabilistic polynomial time, and deterministic collision-free hash functions do not exist.
3. If Ker = PEq, then UP is contained in BQP.
4. There is an oracle relative to which CF, Ker, and PEq are all distinct.
5. There is an oracle relative to which CF_p = Ker_p, yet Ker and PEq are distinct.

Attempting to generalize the third result from UP to NP leads to a new open question about the structure of witness sets for NP problems (roughly: can the witness sets for an NP-complete problem form an Abelian group?). We also introduce a natural notion of reduction between equivalence problems and present several open questions about generalizations of these concepts to the polynomial hierarchy, to logarithmic space, and to counting problems. Many of the new results in this thesis were obtained in collaboration with Lance Fortnow and have been submitted for conference presentation.

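The opening example can be made concrete in a few lines. The hash below is only a stand-in for a genuine complete invariant (it can collide); the point is that an invariant witnesses equality without yielding a distinguished representative, which is exactly the gap between Ker and CF probed above.

```python
# Sorting is a canonical form for set equality; any injective function of
# it (approximated here by a hash) is a complete invariant.
def canonical_form(xs):
    return tuple(sorted(set(xs)))       # equal as sets iff same canonical form

def complete_invariant(xs):
    return hash(canonical_form(xs))     # reveals equality (up to collisions)
                                        # but gives no distinguished member

a, b = [3, 1, 2, 3], [1, 2, 3]
print(canonical_form(a) == canonical_form(b))          # True: same set
print(complete_invariant(a) == complete_invariant(b))  # True
```
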
The following papers are listed without a date or title:

- Gulriz Kurban
- Zhihao Liu
- Sonjia Waxmonsky
- Hoang Trinh
- Matthew Hammer
- Nicolas Laignel

