Statistics and Data Science Colloquium of the Pontificia Universidad Católica de Chile

The Department of Statistics of the Pontificia Universidad Católica de Chile has one of the largest and most distinguished faculties among Chilean and Latin American universities. As part of its effort to strengthen regional ties among researchers in statistics and related fields, the department organizes a local seminar that offers a close look at the work of researchers in the region, as well as at the current research problems of UC faculty, with the aim of building bridges for future collaborations.


2024-05-10
15:30hrs.
Daira Velandia. Universidad de Valparaíso
Estimation methods for a Gaussian process under fixed domain asymptotics
Sala 2
Abstract:
This talk will address some inference tools for Gaussian random fields under the increasing-domain and fixed-domain asymptotic approaches. First, concepts and previous results are presented. Then, we address the results obtained from studying some extensions of the problem of estimating covariance parameters under the two asymptotic frameworks mentioned above.
2024-04-19
11:00hrs.
Garritt Page. Brigham Young University
Informed Bayesian Finite Mixture Models via Asymmetric Dirichlet Priors
Auditorio Ninoslav Bralic
Abstract:
Finite mixture models are flexible methods that are commonly used for model-based clustering. A recent focus in the model-based clustering literature is to highlight the difference between the number of components in a mixture model and the number of clusters. The number of clusters is more relevant from a practical standpoint, but to date the focus of prior distribution formulation has been on the number of components. In light of this, we develop a finite mixture methodology that permits eliciting prior information directly on the number of clusters in an intuitive way. This is done by employing an asymmetric Dirichlet distribution as a prior on the weights of a finite mixture. Further, a penalized-complexity-motivated prior is employed for the Dirichlet shape parameter. We illustrate the ease with which prior information can be elicited via our construction and the flexibility of the resulting induced prior on the number of clusters. We also demonstrate the utility of our approach using numerical experiments and two real-world data sets.
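As a rough, hypothetical illustration of the idea described above (our toy sketch, not the authors' implementation), the snippet below draws finite-mixture weights from symmetric and asymmetric Dirichlet priors and simulates the induced prior on the number of clusters, i.e. the number of components that actually receive observations; the shape values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def induced_cluster_prior(alpha, n=200, n_sims=2000):
    """Simulate the prior on the number of occupied components (clusters)
    induced by a Dirichlet(alpha) prior on finite-mixture weights."""
    counts = []
    for _ in range(n_sims):
        w = rng.dirichlet(alpha)                 # mixture weights
        z = rng.choice(len(alpha), size=n, p=w)  # component labels for n observations
        counts.append(len(np.unique(z)))         # occupied components = clusters
    return np.bincount(counts, minlength=len(alpha) + 1)[1:] / n_sims

K = 10
print("symmetric Dirichlet :", induced_cluster_prior(np.full(K, 0.5)).round(3))
print("asymmetric Dirichlet:", induced_cluster_prior(np.array([5.0] + [0.1] * (K - 1))).round(3))
```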
2024-04-17
15:00hrs.
Tarik Faouzi. USACH
Marquis de Condorcet: from Condorcet's theory of the vote to the method of partition
Sala de usos múltiples, 1er piso
Abstract:

We introduce two new approaches to clustering categorical and mixed data: Condorcet clustering with a fixed number of groups, denoted $$\alpha$$-Condorcet and Mixed-Condorcet, respectively. Like k-modes, this approach is essentially based on similarity and dissimilarity measures. The presentation is divided into three parts: first, we propose a new Condorcet criterion with a fixed number of groups (to assign cases to clusters). In the second part, we propose a heuristic algorithm to carry out the task. In the third part, we compare $$\alpha$$-Condorcet clustering with k-modes clustering and Mixed-Condorcet with k-prototypes. The comparison is made with a quality index, measurement accuracy, and a within-cluster sum-of-squares index.

Our findings are illustrated using real datasets: the feline dataset, the US Census 1990 dataset, and others.
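For readers unfamiliar with the classical Condorcet criterion that these methods build on, here is a minimal sketch in the usual pairwise form (agreements rewarded within clusters, disagreements rewarded across clusters); the $$\alpha$$-Condorcet and Mixed-Condorcet variants presented in the talk are not reproduced here.

```python
import numpy as np
from itertools import combinations

def condorcet_score(X, labels):
    """Classical Condorcet criterion for a partition of categorical data:
    count attribute agreements for pairs in the same cluster and attribute
    disagreements for pairs in different clusters."""
    score = 0
    for i, j in combinations(range(len(X)), 2):
        agree = int(np.sum(X[i] == X[j]))   # matching attributes
        disagree = X.shape[1] - agree       # mismatching attributes
        score += (agree - disagree) if labels[i] == labels[j] else (disagree - agree)
    return score

# Toy categorical data with two obvious groups of rows.
X = np.array([["a", "x"], ["a", "x"], ["b", "y"], ["b", "y"]])
print(condorcet_score(X, np.array([0, 0, 1, 1])))  # coherent partition -> high score
print(condorcet_score(X, np.array([0, 1, 0, 1])))  # scrambled partition -> low score
```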

2024-04-10
15:00hrs.
Ernesto San Martín. PUC
Exploring the Non-Treatment of Missing Data: An Analysis with Chilean Administrative Data
Auditorio Ninoslav Bralic
Abstract:
Statistics, or Statistical Methods as Fisher called them in 1922, has been a fundamental tool of the observational sciences (to use the language of Laplace and Quetelet) since the nineteenth century. Its application rests on three stages outlined by Fisher at a time when the fundamental principles of the discipline "were still shrouded in obscurity": the problem of specification, the problem of estimation, and the problem of distribution. The key lies in the problem of specification: it is not only a matter of defining the probability distribution underlying the observations, but also of representing, through parameters of interest, the questions researchers have about the observed phenomenon.

An important part of social research is based on observations collected through surveys. No one is forced to take part in a survey unless a law establishes otherwise, as in the case of the Census or of plebiscites. Even where participation is compulsory, respondents are not obliged to answer every question, so missing data are common. These missing data, together with the answers provided by other participants, are part of what is observed. It is curious, then, that so much effort goes into treating missing data so that they are no longer missing (through imputation) in order to resolve the problem of specification.

However, a fundamental premise of the observational sciences is to accept what is observed. It is therefore urgent not to treat missing data, but to include them in the specification of the data-generating process. We illustrate this way of proceeding using a small panel: the questionnaires applied to parents and caregivers of school children in 2004 and 2006. These data contain several patterns of missingness; we make them explicit in order to partially identify the conditional distribution of family income in 2006 given family income in 2004. This parameter of interest formalizes a substantive question of our research team at the Núcleo Milenio MOVI: how the income of sons and daughters has changed relative to the income of their fathers and mothers.
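One standard way to formalize "accepting what is observed" is through worst-case (Manski-type) bounds, which partially identify a conditional distribution without modelling the missing responses; this is a generic illustration, and the identification analysis in the talk may differ. Writing $$Z=1$$ when income is reported and $$Z=0$$ when it is missing, the law of total probability gives

$$P(Y \le y \mid X) = P(Y \le y \mid Z=1, X)\,P(Z=1 \mid X) + P(Y \le y \mid Z=0, X)\,P(Z=0 \mid X),$$

and since the unobservable term $$P(Y \le y \mid Z=0, X)$$ is only known to lie in $$[0,1]$$, the observed data bound $$P(Y \le y \mid X)$$ between $$P(Y \le y \mid Z=1, X)\,P(Z=1 \mid X)$$ and that same quantity plus $$P(Z=0 \mid X)$$.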
2024-01-18
11:30hrs.
Christian Caamaño. Universidad del Bío-Bío
A flexible Clayton-like spatial copula with application to bounded support data.
Sala 2
Abstract:
The Gaussian copula is a powerful tool that has been widely used to model spatially and/or temporally correlated data with arbitrary marginal distributions. However, this kind of model can be too restrictive, since it expresses a reflection-symmetric dependence. In this work, we propose a new spatial copula model that yields random fields with arbitrary marginal distributions and a type of dependence that can be reflection symmetric or not.
In particular, we propose a new random field with uniform marginal distribution that can be viewed as a spatial generalization of the classical Clayton copula model. It is obtained through a power transformation of a specific instance of a beta random field, which in turn is obtained through a transformation of two independent Gamma random fields.
For the proposed random field we study the second-order properties and provide analytic expressions for the bivariate distribution and its correlation. Finally, in the reflection-symmetric case, we study the associated geometrical properties.
As an application of the proposed model, we focus on spatial modeling of data with bounded support, specifically on spatial regression models with marginal distributions of the beta type. In a simulation study, we investigate the use of the weighted pairwise composite likelihood method for the estimation of this model. Finally, the effectiveness of our methodology is illustrated by analyzing point-referenced vegetation index data, using the Gaussian copula as a benchmark. Our developments have been implemented in an open-source package for the R statistical environment.

Keywords: Archimedean copula, beta random fields, composite likelihood, reflection asymmetry.
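Read literally, and in our own notation rather than the authors', the construction described above can be sketched as

$$B(s) = \frac{X(s)}{X(s)+Y(s)}, \qquad U(s) = g\{B(s)\},$$

where $$X(s)$$ and $$Y(s)$$ are independent Gamma random fields, so that $$B(s)$$ has Beta marginals, and $$g$$ is a power transformation chosen so that $$U(s)$$ has Uniform(0,1) marginals and hence defines a spatial copula.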
2024-01-18
14:30hrs.
Manuel González. Universidad de La Frontera
Regularization Methods Applied to Chemometrics Problems
Sala 2
2023-12-15
15:00hrs.
Felipe Osorio. UTFSM
Robust estimation in generalized linear models based on maximum Lq-likelihood procedure
Sala Multiuso 1Er Piso, Edificio Felipe Villanueva
Abstract:
In this talk we propose a procedure for robust estimation in the context of generalized linear models based on the maximum Lq-likelihood method. Alongside this, we present an estimation algorithm that is a natural extension of the usual iteratively weighted least squares method for generalized linear models. The discussion of the asymptotic distribution of the proposed estimator and of a set of statistics for testing linear hypotheses makes it possible to define standardized residuals based on the mean-shift outlier model. In addition, robust versions of the deviance function and the Akaike information criterion are defined with the aim of providing tools for model selection. Finally, the performance of the proposed methodology is illustrated through a simulation study and the analysis of a real dataset.
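For context (a standard definition from the maximum Lq-likelihood literature, not taken from the abstract): the estimator replaces the logarithm in the log-likelihood by the q-deformed logarithm,

$$\hat{\theta}_q = \arg\max_{\theta} \sum_{i=1}^{n} L_q\{f(y_i;\theta)\}, \qquad L_q(u) = \frac{u^{1-q}-1}{1-q},$$

which reduces to ordinary maximum likelihood as $$q \to 1$$, since $$L_q(u) \to \log u$$; for $$q < 1$$ the resulting estimating equations downweight observations with small likelihood contributions, which is the source of the robustness.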
2023-11-17
15:00hrs.
Hamdi Raissi. Pontificia Universidad Católica de Valparaíso
Analysis of stocks with time-varying illiquidity levels
Sala Multiuso 1Er Piso/2
Abstract:
The first- and higher-order serial correlations of illiquid stocks' price changes are studied, allowing for unconditional heteroscedasticity and a time-varying probability of zero returns. The dependence structure of the categorical trade/no-trade sequence is also studied. Depending on the setup, we investigate how the dependence measures can be accommodated to deliver an accurate representation of the serial correlations of price changes. We shed some light on the properties of the different tools by means of Monte Carlo experiments. The theoretical arguments are illustrated using shares from the Chilean stock market and the intraday returns of the Facebook stock.
2023-11-03
15:00hrs.
Ramsés Mena. Universidad Nacional Autónoma de México
Random probability measures via dependent stick-breaking priors
Sala Multiuso 1Er Piso/2
Abstract:
I will present a general class of stick-breaking processes with either exchangeable or Markovian length variables. This class generalizes well-known Bayesian nonparametric priors in an unexplored direction. An appealing feature of this new family of nonparametric priors is that we are able to modulate the stochastic ordering of the weights and to recover Dirichlet and geometric priors as extreme cases. A general formula for the distribution of the latent allocation variables is derived, and an MCMC algorithm is proposed for density estimation purposes.
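As a minimal reminder of the stick-breaking construction the talk generalizes (the standard construction, not the speaker's new class), the sketch below builds weights $$w_k = v_k \prod_{j<k}(1-v_j)$$ from length variables $$v_k$$: i.i.d. Beta length variables give the Dirichlet process weights, while a constant length variable gives geometric weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking(length_vars):
    """Turn length variables v_1, v_2, ... into stick-breaking weights
    w_k = v_k * prod_{j<k} (1 - v_j)."""
    v = np.asarray(length_vars, dtype=float)
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))

K = 20
w_dp = stick_breaking(rng.beta(1.0, 2.0, size=K))  # i.i.d. Beta(1, alpha): Dirichlet process weights
w_geo = stick_breaking(np.full(K, 0.3))            # constant v_k: geometric weights
print(w_dp[:5].round(3), w_dp.sum().round(3))
print(w_geo[:5].round(3), w_geo.sum().round(3))
```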
2023-10-20
15:00hrs.
Alfredo Alegria. Universidad Técnica Federico Santa María
Simulation Algorithms and Covariance Modeling for Random Fields on Spheres
Sala de usos múltiples, 2do piso, Edificio Felipe Villanueva
Abstract:
Random fields on spheres play a fundamental role in several natural sciences. This talk addresses two key aspects in an integrated way: simulation algorithms and covariance modeling for random fields defined on the d-dimensional unit sphere. We introduce a simulation algorithm, inspired by the spectral turning-bands method used in Euclidean spaces, which efficiently generates Gaussian random fields on the sphere using Gegenbauer waves. We then explore covariance modeling, focusing on the challenges of modeling global data on the surface of the Earth. The conventional family of isotropic Matérn covariance functions, although widely used, faces limitations when modeling smooth data on the sphere due to restrictions on the smoothness parameter. To address this, we propose a new family of isotropic covariance functions tailored to spherical random fields. This flexible family introduces a parameter that governs mean-square differentiability and allows for a range of fractal dimensions. The talk will highlight the practical implications of these advances through simulation experiments and applications to real data.
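For background (standard material on isotropic covariances on spheres, not specific to the new family proposed in the talk): for $$d \ge 2$$, an isotropic covariance function on the unit sphere admits a Gegenbauer (Schoenberg) expansion in the great-circle distance $$\theta$$,

$$C(\theta) = \sum_{k=0}^{\infty} b_k \, \frac{C_k^{\lambda}(\cos\theta)}{C_k^{\lambda}(1)}, \qquad \lambda = \frac{d-1}{2}, \qquad b_k \ge 0, \ \ \sum_k b_k < \infty,$$

so both simulation schemes based on sums of Gegenbauer waves and new covariance families can be described through the coefficient sequence $$b_k$$.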
2023-09-29
11:00hrs.
Fernando Quintana. Pontificia Universidad Católica de Chile
Childhood obesity in Singapore: A Bayesian nonparametric approach
Sala 1
Abstract:
Overweight and obesity in adults are known to be associated with increased risk of metabolic and cardiovascular diseases. Obesity has now reached epidemic proportions, increasingly affecting children. Therefore, it is important to understand if this condition persists from early life to childhood and if different patterns can be detected to inform intervention policies. Our motivating application is a study of temporal patterns of obesity in children from South Eastern Asia. Our main focus is on clustering obesity patterns after adjusting for the effect of baseline information. Specifically, we consider a joint model for height and weight over time. Measurements are taken every six months from birth. To allow for data-driven clustering of trajectories, we assume a vector autoregressive sampling model with a dependent logit stick-breaking prior. Simulation studies show good performance of the proposed model to capture overall growth patterns, as compared to other alternatives. We also fit the model to the motivating dataset, and discuss the results, in particular highlighting cluster differences and interpretation.
2023-09-13
12:30hrs.
Pedro Ramos. Pontificia Universidad Católica de Chile
A generalized closed-form maximum likelihood estimator
Sala Multiuso 1Er Piso/2
Abstract:
The maximum likelihood estimator plays a fundamental role in statistics. However, for many models the estimators do not have closed-form expressions. This limitation can be significant in situations where estimates and predictions need to be computed in real time, such as in applications based on embedded technology, in which numerical methods cannot be implemented. Here we provide a generalization of the maximum likelihood estimator that allows us to obtain estimators in closed form under some conditions. Under mild conditions, the estimator is invariant under one-to-one transformations, strongly consistent, and asymptotically normal. The proposed generalized version of the maximum likelihood estimator is illustrated on the Gamma, Nakagami, and Beta distributions and compared with the standard maximum likelihood estimator.
2023-09-01
12:30hrs.
Jonathan Acosta Salazar. Pontificia Universidad Católica de Chile
Assessing the Estimation of Nearly Singular Covariance Matrices for Modeling Spatial Variables
Sala Multiuso 1Er Piso/2, Facultad de Matemáticas
Abstract:
Spatial analysis commonly relies on the estimation of a covariance matrix associated with a random field. This estimation strongly impacts prediction at locations where the process has not been observed, which in turn influences the construction of more sophisticated models. If some of the pairwise distances between observations in the plane are small, we may face an ill-conditioned problem that results in a nearly singular covariance matrix. In this paper, we suggest a covariance matrix estimation method that works well even when there are very close pairs of locations in the plane. Our method extends to the spatial case an approach based on estimating the eigenvalues of the unitary matrix decomposition of the covariance matrix. Several numerical examples provide evidence of good performance in estimating the range parameter of the correlation structure of a spatial regression process. In addition, an application to macroalgae estimation in a restricted area of the Pacific Ocean is developed to determine a suitable estimate of the effective sample size associated with the transect sampling scheme.
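To see why nearly coincident locations make the problem ill-conditioned, here is a generic illustration with an exponential covariance and a crude eigenvalue floor; the eigenvalue-based estimator described in the abstract is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Locations in the plane, with one near-duplicate point added on purpose.
coords = rng.uniform(0, 10, size=(30, 2))
coords = np.vstack([coords, coords[0] + 1e-6])

def exp_cov(coords, sigma2=1.0, phi=2.0):
    """Exponential covariance matrix: sigma2 * exp(-distance / phi)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return sigma2 * np.exp(-d / phi)

S = exp_cov(coords)
print("condition number:", np.linalg.cond(S))          # huge: nearly singular

# Crude fix for illustration only: floor the eigenvalues before inverting.
w, V = np.linalg.eigh(S)
S_reg = (V * np.maximum(w, 1e-6 * w.max())) @ V.T
print("condition number after flooring:", np.linalg.cond(S_reg))
```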
2023-07-05
11:30hrs.
Riccardo Corradin. University of Nottingham
A journey through model-based clustering with intractable distributions
Sala 2
Abstract:
Model-based clustering represents one of the fundamental procedures in a statistician's toolbox. Within the model-based clustering framework, we consider the case where the kernel distribution of nonparametric mixture models is available only up to an intractable normalizing constant, in which most of the commonly used Markov chain Monte Carlo methods fail to provide posterior inference. To overcome this problem, we propose an approximate Bayesian computational strategy, whereby we approximate the posterior to avoid the intractability of the kernel. By exploiting the structure of the nonparametric prior, our proposal combines the use of predictive distributions as a proposal with transport maps to obtain an efficient and flexible sampling strategy. Further, we illustrate how the specification of our proposal can be relaxed by introducing an adaptive scheme on the degree of approximation of the posterior distribution. Empirical evidence from simulation studies shows that our proposal outperforms its main competitors in terms of computational times while preserving comparable accuracy of the estimates.
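For readers unfamiliar with the approximate Bayesian computation (ABC) idea that the proposal builds on, below is the most basic rejection scheme; it is a generic sketch, not the predictive-proposal-plus-transport-map strategy described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

def abc_rejection(y_obs, prior_sampler, simulator, summary, tol, n_draws=5000):
    """Basic ABC rejection: keep parameter draws whose simulated data have
    summary statistics within 'tol' of the observed summaries."""
    s_obs = summary(y_obs)
    kept = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if np.linalg.norm(summary(simulator(theta, len(y_obs))) - s_obs) < tol:
            kept.append(theta)
    return np.array(kept)

# Toy example: infer a location parameter without evaluating any likelihood.
y_obs = rng.normal(1.5, 1.0, size=100)
post = abc_rejection(
    y_obs,
    prior_sampler=lambda: rng.normal(0.0, 5.0),
    simulator=lambda mu, n: rng.normal(mu, 1.0, size=n),
    summary=lambda y: np.array([np.mean(y), np.std(y)]),
    tol=0.2,
)
print(len(post), "accepted draws; posterior mean approx.", post.mean() if len(post) else None)
```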
2023-06-28
11:30hrs.
Ricardo Cunha Pedroso. Universidade Federal de Minas Gerais
Multipartition model for multiple change point identification
Sala 1
Abstract:
The product partition model (PPM) is widely used for detecting multiple change points. Because changes in different parameters may occur at different times, the PPM fails to identify which parameters experienced the changes. To address this limitation, we introduce a multipartition model to detect multiple change points occurring in several parameters. It assumes that the changes experienced by each parameter generate a different random partition along the time axis, which facilitates identifying the parameters that changed and the times at which they did so. We apply our model to detect multiple change points in Normal means and variances. Simulations and data illustrations show that the proposed model is competitive and enriches the analysis of change point problems.
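In symbols, one way to read the model described above (our notation, not necessarily the authors'): each parameter has its own random partition of the time axis into contiguous blocks on which it stays constant,

$$y_t \sim N(\mu_t, \sigma_t^2), \qquad \mu_t \ \text{constant within the blocks of } \rho_{\mu}, \qquad \sigma_t^2 \ \text{constant within the blocks of } \rho_{\sigma^2},$$

so a change point for the mean need not be a change point for the variance, and inference on $$\rho_{\mu}$$ and $$\rho_{\sigma^2}$$ separately reveals which parameter changed and when.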
2023-06-16
12:00hrs.
Victor Hugo Lachos Davila. Department of Statistics, University of Connecticut
Lasso regularization for censored regression and high-dimensional predictors
Sala 1
Abstract:
The censored regression model, also known as the Tobit model, is designed to estimate linear relationships between variables in cases where the dependent variable is either left- or right-censored. In this study, we propose a heuristic expectation-maximization (EM) algorithm for handling censored regression models with Lasso regularization, both for variable selection and to accommodate high-dimensional predictors. The procedure is computationally efficient and easily implemented. We describe how this technique can be implemented using available R packages. The proposed methods are assessed using simulated data and two real datasets, in cases where p is less than or equal to n and where p is greater than n.
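A heuristic sketch of the kind of EM-plus-Lasso iteration described above, in our own toy version with left censoring at zero, a fixed working noise scale, and scikit-learn's Lasso standing in for the R packages mentioned in the abstract; this is not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)

# Simulate a sparse linear model, then left-censor the response at 0 (Tobit).
n, p = 200, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y_latent = X @ beta_true + rng.normal(scale=1.0, size=n)
y = np.maximum(y_latent, 0.0)
censored = y_latent <= 0.0

sigma = 1.0                # working noise scale, kept fixed for simplicity
y_work = y.copy()
model = Lasso(alpha=0.05)
for _ in range(20):
    model.fit(X, y_work)                      # M-step: penalized fit on completed data
    mu = model.predict(X)
    a = (0.0 - mu[censored]) / sigma
    # E-step: replace censored values by E[y* | y* <= 0] for a N(mu, sigma^2) latent variable.
    y_work[censored] = mu[censored] - sigma * norm.pdf(a) / norm.cdf(a)

print("indices of nonzero coefficients:", np.flatnonzero(model.coef_))
```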
2023-06-07
11:30hrs.
Cristóbal Guzmán. PUC
Advances in Differentially Private Stochastic Saddle Points
Sala 2
2023-05-31
11:30hrs.
Matteo Gianella. Dipartimento di Matematica, Politecnico di Milano
(Gaussian) graphical models: two examples in Bayesian Statistics
Sala 2
Abstract:
Graphical models are a powerful tool when the aim is to study dependencies between random variables. In this talk, we give two examples of their use under the Bayesian paradigm. One is motivated by the analysis of spectrometric data. We introduce a Gaussian graphical model for learning the dependence structure among frequency bands of the infrared absorbance spectrum. The spectra are modeled as continuous functional data through a B-spline basis expansion, while a Gaussian graphical model is assumed as a prior specification for the smoothing coefficients to induce sparsity in the associated precision matrix. The second example focuses on areal data, where the neighbouring structure is modeled via an undirected graph. We extend the model in Beraha et al. (2021) to jointly perform density estimation and boundary detection.
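For background (a standard fact about Gaussian graphical models, not specific to either example in the talk): zeros in the precision matrix encode conditional independence,

$$\mathbf{y} \sim N_p(\mathbf{0}, \Omega^{-1}), \qquad \Omega_{jk} = 0 \iff y_j \perp y_k \mid \mathbf{y}_{-\{j,k\}},$$

so placing a graph-based prior that induces sparsity in $$\Omega$$, as done here for the B-spline smoothing coefficients, amounts to learning which pairs of variables (e.g. frequency bands) are conditionally dependent.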
2023-05-10
11:30hrs.
Francisco Cuevas. Universidad Técnica Federico Santa María.
Modeling the intensity function of a spatial point pattern using a log-convolution model
Sala 2 Rolando Chuaqui
Abstract:
In point pattern analysis, estimating the first-order intensity function is very important, and different perspectives have been adopted for this purpose. In this work, motivated by saccadic eye-movement data, we introduce a model for the logarithm of the intensity based on the convolution between a covariate and a function to be estimated, β(·). We show that, based on Fourier analysis, the problem of estimating β(·) is equivalent to estimating infinitely many parameters. After truncating, we propose a penalized estimation method to solve this problem.
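In symbols, our reading of the model sketched above is

$$\log \lambda(s) = \int X(s-u)\,\beta(u)\,du, \qquad \beta(u) \approx \sum_{k=-K}^{K} c_k\, e^{iku},$$

so that, via the Fourier expansion, estimating $$\beta(\cdot)$$ amounts to estimating infinitely many coefficients, and after truncating at $$K$$ the finitely many $$c_k$$ are estimated with a penalized criterion.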