Oct 26, 2006
by Joseph Lipka

A theoretical treatment of what can be computed and how fast it can be done. Applications to compilers, string searching, and control circuit design will be discussed. The hierarchy of finite state machines, pushdown machines, context-free grammars and Turing machines will be analyzed, along with their variations. The notions of decidability, complexity theory and a complete discussion of NP-complete problems round out the course. Text: Introduction to the Theory of Computation, Michael Sipser....

Topic: computation

Jun 30, 2018
by Florian Gerber; Kaspar Mösinger; Reinhard Furrer

The R functions .C() and .Fortran() can be used to call compiled C/C++ and Fortran code from R. This so-called foreign function interface is convenient, since it does not require any interactions with the C API of R. However, it does not support long vectors (i.e., vectors of more than 2^31 elements). To overcome this limitation, the R package dotCall64 provides .C64(), which can be used to call compiled C/C++ and Fortran functions. It transparently supports long vectors and does the necessary...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1702.08188

Jun 30, 2018
by Hien D Nguyen

Mean shift (MS) algorithms are popular methods for mode finding in pattern analysis. Each MS algorithm can be phrased as a fixed-point iteration scheme, which operates on a kernel density estimate (KDE) based on some data. The ability of an MS algorithm to obtain the modes of its KDE depends on whether or not the fixed-point scheme converges. The convergence of MS algorithms has recently been proved under some general conditions via first-principles arguments. We complement the recent proofs by...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1703.02337
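
The fixed-point view of mean shift described in the abstract can be sketched in a few lines. This is a minimal, illustrative stdlib-Python implementation (not the paper's code) for a one-dimensional Gaussian KDE; the data, bandwidth, and starting points below are made up for the example.

```python
import math

def mean_shift(data, x0, bandwidth=1.0, tol=1e-8, max_iter=500):
    """Fixed-point iteration x <- sum_i w_i(x) x_i for a Gaussian KDE."""
    x = x0
    for _ in range(max_iter):
        weights = [math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in data]
        total = sum(weights)
        x_new = sum(w * xi for w, xi in zip(weights, data)) / total
        if abs(x_new - x) < tol:   # fixed point reached: x is a mode of the KDE
            break
        x = x_new
    return x

# Two well-separated clusters: starting near each cluster finds its local mode.
data = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
print(mean_shift(data, x0=0.5, bandwidth=0.5))   # converges near 0
print(mean_shift(data, x0=4.5, bandwidth=0.5))   # converges near 5
```

Whether this iteration converges at all, for which kernels, is exactly the question the convergence results in the paper address.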

Jun 30, 2018
by Adam Persing; Ajay Jasra; Alexandros Beskos; David Balding; Maria De Iorio

We observe $n$ sequences at each of $m$ sites, and assume that they have evolved from an ancestral sequence that forms the root of a binary tree of known topology and branch lengths, but the sequence states at internal nodes are unknown. The topology of the tree and branch lengths are the same for all sites, but the parameters of the evolutionary model can vary over sites. We assume a piecewise constant model for these parameters, with an unknown number of change-points and hence a...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1408.6317

Jun 30, 2018
by Adrien Todeschini; François Caron; Marc Fuentes; Pierrick Legrand; Pierre Del Moral

Biips is a software platform for automatic Bayesian inference with interacting particle systems. Biips allows users to define their statistical model in the BUGS probabilistic programming language, as well as to add custom functions or samplers within this language. It then runs sequential Monte Carlo based algorithms (particle filters, particle independent Metropolis-Hastings, particle marginal Metropolis-Hastings) in a black-box manner so as to approximate the posterior distribution of...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1412.3779

Jun 30, 2018
by Marco Giordan; Federico Vaggi; Ron Wehrens

The Levenberg-Marquardt algorithm is a flexible iterative procedure used to solve non-linear least squares problems. In this work we study how a class of possible adaptations of this procedure can be used to solve maximum likelihood problems when the underlying distributions are in the exponential family. We formally demonstrate a local convergence property and we discuss a possible implementation of the penalization involved in this class of algorithms. Applications to real and simulated...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1410.0793
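
The Levenberg-Marquardt procedure the abstract builds on can be sketched concretely. This is a minimal stdlib-Python illustration (not the adaptation studied in the paper) for the standard non-linear least squares case, fitting the hypothetical two-parameter model f(x) = a·exp(b·x); the damped normal equations are solved directly since there are only two parameters.

```python
import math

def levenberg_marquardt(xs, ys, a, b, lam=1e-3, n_iter=100):
    """Damped Gauss-Newton (Levenberg-Marquardt) for f(x) = a * exp(b * x)."""
    def sse(a, b):
        return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))
    for _ in range(n_iter):
        # Jacobian of the model and current residuals.
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        r = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
        # Solve (J^T J + lam * diag(J^T J)) delta = J^T r for the 2 parameters.
        A11 = sum(j[0] * j[0] for j in J)
        A12 = sum(j[0] * j[1] for j in J)
        A22 = sum(j[1] * j[1] for j in J)
        g1 = sum(j[0] * ri for j, ri in zip(J, r))
        g2 = sum(j[1] * ri for j, ri in zip(J, r))
        M11, M22 = A11 * (1 + lam), A22 * (1 + lam)
        det = M11 * M22 - A12 * A12
        da = (g1 * M22 - g2 * A12) / det
        db = (g2 * M11 - g1 * A12) / det
        # Damping control: accept the step only if it reduces the error.
        if sse(a + da, b + db) < sse(a, b):
            a, b, lam = a + da, b + db, lam * 0.5
        else:
            lam *= 2.0
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.7 * x) for x in xs]   # noise-free data with a=2, b=0.7
print(levenberg_marquardt(xs, ys, a=1.0, b=0.0))
```

The paper's contribution is to adapt this kind of iteration from least squares to maximum likelihood for exponential-family distributions.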

Jun 30, 2018
by James Ridgway

We study the computation of Gaussian orthant probabilities, i.e. the probability that a Gaussian falls inside a quadrant. The Geweke-Hajivassiliou-Keane (GHK) algorithm [Genz, 1992; Geweke, 1991; Hajivassiliou et al., 1996; Keane, 1993] is currently used for integrals of dimension greater than 10. In this paper we show that for Markovian covariances GHK can be interpreted as the estimator of the normalizing constant of a state space model using sequential importance sampling (SIS). We show for...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1411.1314
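
The GHK estimator the abstract refers to is easy to state in two dimensions, where the orthant probability has a closed form to check against. The sketch below is a minimal stdlib-Python illustration (not the paper's SIS construction): sample each Cholesky coordinate from its truncated conditional and accumulate the product of the conditional probabilities.

```python
import math
import random
from statistics import NormalDist

def ghk_orthant(rho, n=50000, seed=1):
    """GHK estimate of P(X1 > 0, X2 > 0) for a standard bivariate normal."""
    rng = random.Random(seed)
    nd = NormalDist()
    s = math.sqrt(1.0 - rho * rho)   # Cholesky: x1 = z1, x2 = rho*z1 + s*z2
    total = 0.0
    for _ in range(n):
        # P(x1 > 0), then draw z1 from N(0,1) truncated to (0, inf).
        p1 = 0.5
        z1 = nd.inv_cdf(0.5 + 0.5 * rng.random())
        # Given z1, x2 > 0 requires z2 > -rho*z1/s; weight by that probability.
        p2 = 1.0 - nd.cdf(-rho * z1 / s)
        total += p1 * p2
    return total / n

rho = 0.5
est = ghk_orthant(rho)
exact = 0.25 + math.asin(rho) / (2.0 * math.pi)   # closed form in 2D
print(est, exact)
```

In higher dimensions the same product-of-conditionals structure applies, which is precisely what the paper reinterprets as sequential importance sampling.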

Jun 30, 2018
by Virgilio Gómez-Rubio; Francisco Palmí-Perales

The Integrated Nested Laplace Approximation (INLA) is a convenient way to obtain approximations to the posterior marginals for parameters in Bayesian hierarchical models when the latent effects can be expressed as a Gaussian Markov Random Field (GMRF). In addition, its implementation in the R-INLA package for the R statistical software provides an easy way to fit models using INLA in practice. R-INLA implements a number of widely used latent models, including several spatial models. In...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1702.03891

Jun 30, 2018
by Negin Alemazkoor; Hadi Meidani

Compressive sampling has become a widely used approach to construct polynomial chaos surrogates when the number of available simulation samples is limited. Originally, these expensive simulation samples would be obtained at random locations in the parameter space. It was later shown that the choice of sample locations could significantly impact the accuracy of resulting surrogates. This motivated new sampling strategies or design-of-experiment approaches, such as coherence-optimal sampling,...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1702.07830

Jun 30, 2018
by Iker Perez; David Hodge; Theodore Kypraios

Queue networks describe complex stochastic systems of both theoretical and practical interest. They provide the means to assess alterations, diagnose poor performance and evaluate robustness across sets of interconnected resources. In the present paper, we focus on the underlying continuous-time Markov chains induced by these networks, and we present a flexible method for drawing parameter inference in multi-class Markovian cases with switching and different service disciplines. The approach is...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1703.03475

Jun 30, 2018
by Qingyuan Zhao; Trevor Hastie; Daryl Pregibon

We consider the problem where we have a multi-way table of means, indexed by several factors, where each factor can have a large number of levels. The entry in each cell is the mean of some response, averaged over the observations falling into that cell. Some cells may be very sparsely populated, and in extreme cases, not populated at all. We might still like to estimate an expected response in such cells. We propose here a novel hierarchical ANOVA (HANOVA) representation for such data. Sparse...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1703.02081

Jun 27, 2018
by Joaquin Miguez; Manuel A. Vazquez

Distributed signal processing algorithms have become a hot topic in recent years. One class of algorithms that has received special attention is particle filters (PFs). However, most distributed PFs involve various heuristic or simplifying approximations and, as a consequence, classical convergence theorems for standard PFs do not hold for their distributed counterparts. In this paper, we analyze a distributed PF based on the non-proportional weight-allocation scheme of Bolic {\em et...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1504.01079

Jun 28, 2018
by Mauricio Zevallos; Loretta Gasco; Ricardo Ehlers

In this paper we perform Bayesian estimation of stochastic volatility models with heavy-tailed distributions using Metropolis-adjusted Langevin (MALA) and Riemann manifold Langevin (MMALA) methods. We provide analytical expressions for the application of these methods, assess the performance of these methodologies on simulated data, and illustrate their use on two financial time series data sets.

Topics: Statistics, Computation

Source: http://arxiv.org/abs/1507.05079
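
The MALA update used in the abstract is a gradient-informed Metropolis-Hastings step. The sketch below is a minimal stdlib-Python illustration on a toy standard-normal target, not the stochastic volatility application of the paper: propose along the gradient of the log density, then accept or reject with the usual asymmetric-proposal correction.

```python
import math
import random

def mala(log_grad, log_pdf, x0=0.0, eps=0.9, n=20000, seed=7):
    """Metropolis-adjusted Langevin: drift along the gradient, then MH-correct."""
    rng = random.Random(seed)
    def log_q(y, x):
        # Log proposal density (up to a constant, which cancels in the ratio).
        mu = x + 0.5 * eps * eps * log_grad(x)
        return -((y - mu) ** 2) / (2.0 * eps * eps)
    x, samples = x0, []
    for _ in range(n):
        prop = x + 0.5 * eps * eps * log_grad(x) + eps * rng.gauss(0.0, 1.0)
        log_alpha = (log_pdf(prop) + log_q(x, prop)) - (log_pdf(x) + log_q(prop, x))
        if math.log(rng.random() + 1e-300) < log_alpha:
            x = prop
        samples.append(x)
    return samples

# Toy target: standard normal, log pi(x) = -x^2/2, grad log pi(x) = -x.
draws = mala(lambda x: -x, lambda x: -0.5 * x * x)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(mean, var)
```

MMALA replaces the fixed step size here with a position-dependent metric, which is the Riemannian refinement the paper applies.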

Jun 28, 2018
by Michael Ludkovski; Katherine Shatskikh

Traditional epidemic detection algorithms make decisions using only local information. We propose a novel approach that explicitly models spatial information fusion from several metapopulations. Our method also takes into account cost-benefit considerations regarding the announcement of epidemic. We utilize a compartmental stochastic model within a Bayesian detection framework which leads to a dynamic optimization problem. The resulting adaptive, non-parametric detection strategy optimally...

Topics: Statistics, Computation

Source: http://arxiv.org/abs/1509.04229

Jun 29, 2018
by Meng Hwee Victor Ong; David J. Nott; Ajay Jasra

Flexible regression methods where interest centres on the way that the whole distribution of a response vector changes with covariates are very useful in some applications. A recently developed technique in this regard uses the matrix-variate Dirichlet process as a prior for a mixing distribution on a coefficient in a multivariate linear regression model. The method is attractive, particularly in the multivariate setting, for the convenient way that it allows for borrowing strength across...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1602.08849

Jun 29, 2018
by Dennis Prangle; Richard G. Everitt; Theodore Kypraios

Approximate Bayesian computation (ABC) methods permit approximate inference for intractable likelihoods when it is possible to simulate from the model. However they perform poorly for high dimensional data, and in practice must usually be used in conjunction with dimension reduction methods, resulting in a loss of accuracy which is hard to quantify or control. We propose a new ABC method for high dimensional data based on rare event methods which we refer to as RE-ABC. This uses a latent...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1611.02492
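
The baseline that RE-ABC improves on is plain rejection ABC: simulate from the model at a parameter drawn from the prior and keep the parameter when the simulated summary lands close to the observed one. The sketch below is a minimal stdlib-Python illustration of that baseline only (not RE-ABC); the normal model, uniform prior, and tolerance are made up for the example.

```python
import random

def abc_rejection(observed, prior_draw, tol, n_sims, seed=3):
    """Basic rejection ABC with the sample mean as the summary statistic."""
    rng = random.Random(seed)
    obs_mean = sum(observed) / len(observed)
    accepted = []
    for _ in range(n_sims):
        theta = prior_draw(rng)
        sim_mean = sum(rng.gauss(theta, 1.0) for _ in range(len(observed))) / len(observed)
        if abs(sim_mean - obs_mean) < tol:   # keep theta if summaries match
            accepted.append(theta)
    return accepted

data_rng = random.Random(0)
observed = [data_rng.gauss(2.0, 1.0) for _ in range(30)]   # synthetic data, mean 2
post = abc_rejection(observed, lambda r: r.uniform(-5.0, 5.0), tol=0.1, n_sims=50000)
print(sum(post) / len(post))   # approximate posterior mean, near the data mean
```

The dimension problem the abstract mentions shows up here directly: with many summaries, almost no simulation lands within the tolerance, which is what the rare-event machinery of RE-ABC targets.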

Jun 30, 2018
by Boqian Zhang; Vinayak Rao

Markov jump processes (MJPs) are continuous-time stochastic processes that find wide application in a variety of disciplines. Inference for MJPs typically proceeds via Markov chain Monte Carlo, the state-of-the-art being an auxiliary variable Gibbs sampler proposed recently. This algorithm was designed for the situation where the MJP parameters are known, and Bayesian inference over unknown parameters is typically carried out by incorporating this into a larger Gibbs sampler. This strategy of...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1704.02369

Jun 27, 2018
by Fredrik Lindsten; Pete Bunch; Simo Särkkä; Thomas B. Schön; Simon J. Godsill

Sequential Monte Carlo (SMC) methods, such as the particle filter, are by now one of the standard computational techniques for addressing the filtering problem in general state-space models. However, many applications require post-processing of data offline. In such scenarios the smoothing problem--in which all the available data is used to compute state estimates--is of central interest. We consider the smoothing problem for a class of conditionally linear Gaussian models. We present a...

Topics: Statistics, Computation

Source: http://arxiv.org/abs/1505.06357

Jun 28, 2018
by A. Garbuno-Inigo; F. A. DiazDelaO; K. M. Zuev

Gaussian process emulators of computationally expensive computer codes provide fast statistical approximations to model physical processes. The training of these surrogates depends on the set of design points chosen to run the simulator. Due to computational cost, such a training set is bound to be limited, and quantifying the resulting uncertainty in the hyper-parameters of the emulator by uni-modal distributions is likely to induce bias. In order to quantify this uncertainty, this paper proposes...

Topics: Statistics, Computation

Source: http://arxiv.org/abs/1506.08010

Jun 28, 2018
by Olivier Féron; François Orieux; Jean-François Giovannelli

This paper deals with Gibbs samplers that include high dimensional conditional Gaussian distributions. It proposes an efficient algorithm that avoids the high dimensional Gaussian sampling and relies on a random excursion along a small set of directions. The algorithm is proved to converge, i.e. the drawn samples are asymptotically distributed according to the target distribution. Our main motivation is in inverse problems related to general linear observation models and their solution in a...

Topics: Statistics, Computation

Source: http://arxiv.org/abs/1509.03495
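
The kind of Gibbs sampler the abstract starts from alternates draws from conditional Gaussian distributions. The sketch below is a minimal stdlib-Python illustration of that baseline in two dimensions (a standard bivariate normal with correlation rho), not the directional-excursion algorithm the paper proposes; in high dimensions each conditional draw is exactly the expensive step the paper avoids.

```python
import random

def gibbs_bivariate_normal(rho, n=30000, seed=11):
    """Gibbs sampler alternating the two conditional Gaussian draws."""
    rng = random.Random(seed)
    s = (1.0 - rho * rho) ** 0.5   # conditional standard deviation
    x1 = x2 = 0.0
    out = []
    for _ in range(n):
        x1 = rng.gauss(rho * x2, s)   # x1 | x2 ~ N(rho*x2, 1 - rho^2)
        x2 = rng.gauss(rho * x1, s)   # x2 | x1 ~ N(rho*x1, 1 - rho^2)
        out.append((x1, x2))
    return out

draws = gibbs_bivariate_normal(0.8)
m1 = sum(d[0] for d in draws) / len(draws)
cov = sum(d[0] * d[1] for d in draws) / len(draws)
print(m1, cov)   # near 0 and near rho = 0.8
```

The samples are asymptotically distributed according to the joint target, which is the convergence property the paper establishes for its cheaper directional variant.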

Jun 30, 2018
by Julien Chiquet; Pierre Gutierrez; Guillem Rigaill

Given a data set with many features observed in a large number of conditions, it is desirable to fuse and aggregate conditions which are similar to ease the interpretation and extract the main characteristics of the data. This paper presents a multidimensional fusion penalty framework to address this question when the number of conditions is large. If the fusion penalty is encoded by an $\ell_q$-norm, we prove for uniform weights that the path of solutions is a tree which is suitable for...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1407.5915

Jun 28, 2018
by L. Martino; F. Louzada

The Multiple Try Metropolis (MTM) algorithm is an advanced MCMC technique based on drawing and testing several candidates at each iteration of the algorithm. One of them is selected according to certain weights and then tested according to a suitable acceptance probability. Clearly, since the computational cost increases as the number of tries grows, one expects the performance of an MTM scheme to improve as the number of tries increases. However, there are scenarios...

Topics: Statistics, Computation

Source: http://arxiv.org/abs/1508.04253
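
The MTM iteration described in the abstract can be written down compactly for the simplest weight choice. This is a minimal stdlib-Python sketch on a toy standard-normal target, assuming a symmetric Gaussian proposal and weights w(y) = pi(y); the step size and number of tries are made up for the example.

```python
import math
import random

def mtm(log_pdf, x0=0.0, k=5, step=2.0, n=10000, seed=5):
    """Multiple Try Metropolis with k candidates and weights w(y) = pi(y)."""
    rng = random.Random(seed)
    pdf = lambda z: math.exp(log_pdf(z))
    x, out = x0, []
    for _ in range(n):
        # Draw k candidates from a symmetric proposal; select one by weight.
        ys = [x + rng.gauss(0.0, step) for _ in range(k)]
        wy = [pdf(y) for y in ys]
        y = rng.choices(ys, weights=wy)[0]
        # Reference set: k-1 fresh points around y, plus the current state x.
        xs = [y + rng.gauss(0.0, step) for _ in range(k - 1)] + [x]
        wx = [pdf(z) for z in xs]
        # Generalized acceptance probability for the multiple-try scheme.
        if rng.random() < min(1.0, sum(wy) / sum(wx)):
            x = y
        out.append(x)
    return out

draws = mtm(lambda z: -0.5 * z * z)   # target: standard normal (unnormalized)
print(sum(draws) / len(draws))
```

Note the cost per iteration: 2k - 1 proposal draws and density evaluations, which is the cost-versus-tries trade-off the paper examines.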

Jun 28, 2018
by S. Agapiou; O. Papaspiliopoulos; D. Sanz-Alonso; A. M. Stuart

The basic idea of importance sampling is to use independent samples from a proposal measure in order to approximate expectations with respect to a target measure. It is key to understand how many samples are required in order to guarantee accurate approximations. Intuitively, some notion of distance between the target and the proposal should determine the computational cost of the method. A major challenge is to quantify this distance in terms of parameters or statistics that are pertinent for...

Topics: Statistics, Computation

Source: http://arxiv.org/abs/1511.06196
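
The basic estimator the abstract analyzes can be sketched in a few lines. This is a minimal stdlib-Python illustration (not the paper's analysis) of self-normalized importance sampling, which only needs both densities up to a constant; the toy target N(1, 1) and proposal N(0, 2) are made up for the example.

```python
import math
import random

def importance_sampling(f, log_target, log_prop, prop_draw, n=50000, seed=2):
    """Self-normalized IS: weight proposal draws by the density ratio."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = prop_draw(rng)
        w = math.exp(log_target(x) - log_prop(x))   # unnormalized weight
        num += w * f(x)
        den += w
    return num / den

# Target N(1, 1), approximated with draws from the wider proposal N(0, 2).
log_t = lambda x: -0.5 * (x - 1.0) ** 2
log_p = lambda x: -0.5 * (x / 2.0) ** 2 - math.log(2.0)
est = importance_sampling(lambda x: x, log_t, log_p, lambda r: r.gauss(0.0, 2.0))
print(est)   # estimates E[X] under the target, i.e. about 1
```

The further the proposal is from the target, the heavier the weight distribution and the more samples `n` are needed for a given accuracy, which is the sample-size question the paper quantifies.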

Jun 30, 2018
by Moumita Das; Sourabh Bhattacharya

In this article, we propose a novel and general dimension-hopping MCMC methodology that can update all the parameters as well as the number of parameters simultaneously using simple deterministic transformations of some low-dimensional (often one-dimensional) random variable. This methodology, which has been inspired by the recent Transformation based MCMC (TMCMC) for updating all the parameters simultaneously in general fixed-dimensional set-ups using low-dimensional random variables,...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1403.5207

Jun 27, 2018
by Alexandros Beskos; Ajay Jasra; Kody Law; Raul Tempone; Yan Zhou

In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance finite element methods and leading to a discretisation bias, with the step-size level $h_L$. In addition, the expectation cannot be computed analytically and one often resorts to Monte...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1503.07259

Jun 29, 2018
by Sharon X Lee; Kaleb L Lee; Geoffrey J McLachlan

Finite mixture models have been widely used for the modelling and analysis of data from heterogeneous populations. Maximum likelihood estimation of the parameters is typically carried out via the Expectation-Maximization (EM) algorithm. The complexity of the implementation of the algorithm depends on the parametric distribution that is adopted as the component densities of the mixture model. In the case of the skew normal and skew t-distributions, for example, the E-step would involve...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1606.02054

Jun 29, 2018
by Omiros Papaspiliopoulos; David Rossell

We propose a scalable algorithmic framework for exact Bayesian variable selection and model averaging in linear models under the assumption that the Gram matrix is block-diagonal, and as a heuristic for exploring the model space for general designs. In block-diagonal designs our approach returns the most probable model of any given size without resorting to numerical integration. The algorithm also provides a novel and efficient solution to the frequentist best subset selection problem for...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1606.03749

Jun 29, 2018
by Jiwoong Kim

This article provides a full description of the R package KoulMde, which is designed for Koul's minimum distance estimation method. When we encounter estimation problems in the linear regression and autoregressive models, this package provides more efficient estimators than other R packages.

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1606.04182

Jun 28, 2018
by Alexander Gribov

One of the most efficient ways to produce unconditional simulations is with the spectral method using fast Fourier transform (FFT) [1]. But this approach is not applicable to arbitrary surfaces because no regular grid exists. However, points on the arbitrary surface can be generated randomly using uniform distribution to replace a regular grid. This paper will describe a nonstationary kernel convolution approach for data on arbitrary surfaces.

Topics: Statistics, Computation

Source: http://arxiv.org/abs/1509.01745

Jun 28, 2018
by Michael Beard; Ba-Tuong Vo; Ba-Ngu Vo; Sanjeev Arulampalam

The generalized labeled multi-Bernoulli (GLMB) is a family of tractable models that alleviates the limitations of the Poisson family in dynamic Bayesian inference of point processes. In this paper, we derive closed form expressions for the void probability functional and the Cauchy-Schwarz divergence for GLMBs. The proposed analytic void probability functional is a necessary and sufficient statistic that uniquely characterizes a GLMB, while the proposed analytic Cauchy-Schwarz divergence...

Topics: Statistics, Computation

Source: http://arxiv.org/abs/1510.05532

Jun 27, 2018
by Jingjing Li; David J. Nott; Yanan Fan; Scott A. Sisson

Approximate Bayesian computation (ABC) refers to a family of inference methods used in the Bayesian analysis of complex models where evaluation of the likelihood is difficult. Conventional ABC methods often suffer from the curse of dimensionality, and a marginal adjustment strategy was recently introduced in the literature to improve the performance of ABC algorithms in high-dimensional problems. The marginal adjustment approach is extended using a Gaussian copula approximation. The method...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1504.04093

Jun 27, 2018
by Sagnik Datta; Ghislaine Gayraud; Eric Leclerc; Frederic Y. Bois

Bayesian networks (BNs) are widely used graphical models for drawing statistical inference about directed acyclic graphs (DAGs). We present here Graph_sampler, a fast, free C-language software package for structural inference on BNs. Graph_sampler uses a fully Bayesian approach in which the marginal likelihood of the data and prior information about the network structure are considered. This new software can handle both continuous and discrete data, and based on the data type two different...

Topics: Statistics, Computation

Source: http://arxiv.org/abs/1505.07228

Jun 29, 2018
by Hien D Nguyen; Geoffrey J McLachlan; Jeremy F P Ullmann; Andrew L Janke

Functional data analysis (FDA) is an important modern paradigm for handling infinite-dimensional data. An important task in FDA is model-based clustering, which organizes functional populations into groups via subpopulation structures. The most common approach for model-based clustering of functional data is via mixtures of linear mixed-effects models. The mixture of linear mixed-effects models (MLMM) approach requires a computationally intensive algorithm for estimation. We provide a novel...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1608.05481

Jun 29, 2018
by Leonardo Egidi; Roberta Pappadà; Francesco Pauli; Nicola Torelli

An algorithm for extracting identity submatrices of small rank and pivotal units from large and sparse matrices is proposed. The procedure has already been satisfactorily applied for solving the label switching problem in Bayesian mixture models. Here we introduce it on its own and explore possible applications in different contexts.

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1611.01069

Jun 29, 2018
by Aliaksandr Hubin; Geir Storvik

The marginal likelihood is a well-established model selection criterion in Bayesian statistics. It also allows one to efficiently calculate the marginal posterior model probabilities that can be used for Bayesian model averaging of quantities of interest. For many complex models, including latent modeling approaches, marginal likelihoods are, however, difficult to compute. One recent promising approach for approximating the marginal likelihood is Integrated Nested Laplace Approximation (INLA), design...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1611.01450

Jun 29, 2018
by Hillary Fairbanks; Alireza Doostan; Christian Ketelsen; Gianluca Iaccarino

Multilevel Monte Carlo (MLMC) is a recently proposed variation of Monte Carlo (MC) simulation that achieves variance reduction by simulating the governing equations on a series of spatial (or temporal) grids with increasing resolution. Instead of directly employing the fine grid solutions, MLMC estimates the expectation of the quantity of interest from the coarsest grid solutions as well as differences between each two consecutive grid solutions. When the differences corresponding to finer...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1611.02213
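
The telescoping structure of MLMC described in the abstract (coarsest-level estimate plus corrections between consecutive levels) can be shown on a toy problem. This is a minimal stdlib-Python sketch, not the PDE setting of the paper: the quantity of interest is E[exp(U)] for U ~ Uniform(0,1), and the level-l "discretization" rounds U down to a grid of spacing 2^-l.

```python
import math
import random

def mlmc(max_level=6, n0=40000, seed=4):
    """Telescoping MLMC estimate of E[exp(U)], U ~ Uniform(0, 1).

    Level-l approximation P_l(u) = exp(floor(u * 2**l) / 2**l): coarse levels
    are cheap but biased; the corrections P_l - P_{l-1} use the SAME u
    (coupled samples) and shrink with l, so finer levels need fewer samples.
    """
    rng = random.Random(seed)
    P = lambda u, l: math.exp(math.floor(u * 2 ** l) / 2 ** l)
    est = 0.0
    for l in range(max_level + 1):
        n_l = max(n0 // 4 ** l, 100)   # geometric decay of samples per level
        total = 0.0
        for _ in range(n_l):
            u = rng.random()
            total += P(u, l) if l == 0 else P(u, l) - P(u, l - 1)
        est += total / n_l
    return est

print(mlmc(), math.e - 1.0)   # exact value of E[exp(U)] is e - 1
```

Plain MC at the finest level would need all its samples at the most expensive resolution; here most of the work sits on the cheap coarse levels, which is the variance-reduction mechanism the abstract describes.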

Jun 29, 2018
by Simon Grund; Oliver Lüdtke; Alexander Robitzsch

The treatment of missing data can be difficult in multilevel research because state-of-the-art procedures such as multiple imputation (MI) may require advanced statistical knowledge or a high degree of familiarity with certain statistical software. In the missing data literature, pan has been recommended for MI of multilevel data. In this article, we provide an introduction to MI of multilevel missing data using the R package pan, and we discuss its possibilities and limitations in...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1611.03112

Jun 29, 2018
by Bilal Saad; Alen Alexanderian; Serge Prudhomme; Omar M. Knio

This work focuses on the simulation of CO$_2$ storage in deep underground formations under uncertainty and seeks to understand the impact of uncertainties in reservoir properties on CO$_2$ leakage. To simulate the process, a non-isothermal two-phase two-component flow system with equilibrium phase exchange is used. Since model evaluations are computationally intensive, instead of traditional Monte Carlo methods, we rely on polynomial chaos (PC) expansions for representation of the stochastic...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1602.02736

Jun 29, 2018
by Andrew Holbrook; Shiwei Lan; Alexander Vandenberg-Rodes; Babak Shahbaba

We extend the application of Hamiltonian Monte Carlo to allow for sampling from probability distributions defined over symmetric or Hermitian positive definite matrices. To do so, we exploit the Riemannian structure induced by Cartan's century-old canonical metric. The geodesics that correspond to this metric are available in closed-form and---within the context of Lagrangian Monte Carlo---provide a principled way to travel around the space of positive definite matrices. Our method improves...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1612.08224

Jun 30, 2018
by Manuela Zucknick; Sylvia Richardson

In large-scale genomic applications vast numbers of molecular features are scanned in order to find a small number of candidates which are linked to a particular disease or phenotype. This is a variable selection problem in the "large p, small n" paradigm where many more variables than samples are available. Additionally, a complex dependence structure is often observed among the markers/genes due to their joint involvement in biological processes and pathways. Bayesian variable...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1402.2713

Jun 30, 2018
by Peter G. M. Forbes; Kanti V. Mardia

Motivated by molecular biology, there has been an upsurge of research activity in directional statistics in general and its Bayesian aspect in particular. The central distribution for the circular case is the von Mises distribution, which has two parameters (mean and concentration) akin to the univariate normal distribution. However, it has been a challenge to sample efficiently from the posterior distribution of the concentration parameter. We describe a novel, highly efficient algorithm to...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1402.3569

Jun 30, 2018
by Zhigang Yao; William F. Eddy

Magnetoencephalography (MEG) is an imaging technique used to measure the magnetic field outside the human head produced by the electrical activity inside the brain. The MEG inverse problem, identifying the location of the electrical sources from the magnetic signal measurements, is ill-posed, that is, there are an infinite number of mathematically correct solutions. Common source localization methods assume the source does not vary with time and do not provide estimates of the variability of...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1401.2395

Jun 26, 2018
by Eduardo F. Mendes; Marcel Scharth; Robert Kohn

We introduce a new Markov chain Monte Carlo (MCMC) sampler called the Markov Interacting Importance Sampler (MIIS). The MIIS sampler uses conditional importance sampling (IS) approximations to jointly sample the current state of the Markov Chain and estimate conditional expectations, possibly by incorporating a full range of variance reduction techniques. We compute Rao-Blackwellized estimates based on the conditional expectations to construct control variates for estimating expectations under...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1502.07039

Jun 30, 2018
by Douglas Bates; Martin Mächler; Ben Bolker; Steve Walker

Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1406.5823

Jun 30, 2018
by Vinayak Rao; Lizhen Lin; David Dunson

We present a data augmentation scheme to perform Markov chain Monte Carlo inference for models where data generation involves a rejection sampling algorithm. Our idea, which seems to be missing in the literature, is a simple scheme to instantiate the rejected proposals preceding each data point. The resulting joint probability over observed and rejected variables can be much simpler than the marginal distribution over the observed variables, which often involves intractable integrals. We...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1406.6652

Jun 29, 2018
by Kainan Wang; Tan Bui-Thanh; Omar Ghattas

We present a randomized maximum a posteriori (rMAP) method for generating approximate samples of posteriors in high dimensional Bayesian inverse problems governed by large-scale forward problems. We derive the rMAP approach by: 1) casting the problem of computing the MAP point as a stochastic optimization problem; 2) interchanging optimization and expectation; and 3) approximating the expectation with a Monte Carlo method. For a specific randomized data and prior mean, rMAP reduces to the...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1602.03658

Jun 29, 2018
by Ajay Jasra; Kody Law; Yan Zhou

This paper considers uncertainty quantification for an elliptic nonlocal equation. In particular, it is assumed that the parameters which define the kernel in the nonlocal operator are uncertain and a priori distributed according to a probability measure. It is shown that the induced probability measure on some quantities of interest arising from functionals of the solution to the equation with random inputs is well-defined; as is the posterior distribution on parameters given observations. As...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1603.06381

Jun 29, 2018
by K. V. P. Barco; J. Mazucheli; V. Janeiro

Several probability distributions have been proposed in the literature, especially with the aim of obtaining models that are more flexible relative to the behaviors of the density and hazard rate functions. Recently, a new generalization of the Lindley distribution was proposed by Ghitany et al. (2013), called the power Lindley distribution. Another generalization was proposed by Sharma et al. (2015), known as the inverse Lindley distribution. In this paper, a new distribution is proposed, which is...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1604.07080

Jun 29, 2018
by George Deligiannidis; Anthony Lee

We show that the class of $L^2$ functions for which ergodic averages of a reversible Markov chain have finite asymptotic variance is determined by the class of $L^2$ functions for which ergodic averages of its associated jump chain have finite asymptotic variance. This allows us to characterize completely which ergodic averages have finite asymptotic variance when the Markov chain is an independence sampler. In addition, we obtain a simple sufficient condition for all ergodic averages of $L^2$...

Topics: Computation, Statistics

Source: http://arxiv.org/abs/1606.08373