**Understanding the Fundamentals of Denoising Diffusion Probabilistic Models**

**1.** **Introduction**

In recent posts, I've talked about GANs and VAEs as two important generative models that have been very successful and widely recognized. GANs work great for many applications; however, they are difficult to train and their output lacks diversity due to challenges such as mode collapse and vanishing gradients, to name a few. Although VAEs have a strong theoretical foundation, designing a good loss function is a challenge in VAEs that makes their output suboptimal.

There is another family of techniques, rooted in probabilistic likelihood estimation and inspired by physical phenomena, called Diffusion Models. The core idea behind diffusion models comes from the thermodynamics of gas molecules, where molecules diffuse from areas of high density to areas of low density. In the physics literature, this movement is often described as an increase in entropy, or heat death. In information theory, this amounts to a loss of information due to the gradual intervention of noise.

The key concept in diffusion modeling is that if we could build a learning model that can learn the systematic decay of information due to noise, it should be possible to reverse the process and thus recover the information from the noise. This idea is similar to VAEs in that we optimize an objective function by first projecting the data into a latent space and then recovering it to the initial state. However, instead of learning the data distribution directly, the system models a series of noise distributions in a *Markov chain* and "decodes" the data by denoising it in a hierarchical fashion.

**2.** **Denoising Diffusion Models**

The idea of denoising diffusion models has been around for a long time. It has its roots in the concept of diffusion maps, one of the dimensionality reduction techniques used in the machine learning literature. It also borrows concepts from probabilistic methods such as *Markov chains*, which have been used in many applications. The original denoising diffusion method was proposed in *Sohl-Dickstein et al.* [1].

A denoising diffusion model is a two-step process: the forward diffusion process and the reverse, or reconstruction, process. In the forward diffusion process, Gaussian noise is introduced successively until the data becomes pure noise. The reverse reconstruction process then undoes this by learning the conditional probability densities with a neural network model. An example representation of this process can be seen in Figure 1.

**3.** **Forward Diffusion Process**

We can formally define the forward diffusion process as a *Markov chain*; therefore, unlike the encoder in a VAE, it does not require training. Starting from the initial data point, we add Gaussian noise over **T** successive steps and obtain a set of noisy samples. The predicted probability density at time *t* depends only on its immediate predecessor at time *t-1*, so the conditional probability density can be calculated as follows:

$$q(\mathbf{x}_t \mid \mathbf{x}_{t-1}) = \mathcal{N}\left(\mathbf{x}_t;\; \sqrt{1-\beta_t}\,\mathbf{x}_{t-1},\; \beta_t \mathbf{I}\right)$$

The complete distribution of the entire process can then be calculated as follows:

$$q(\mathbf{x}_{1:T} \mid \mathbf{x}_0) = \prod_{t=1}^{T} q(\mathbf{x}_t \mid \mathbf{x}_{t-1})$$

Here, the mean and variance of the density function depend on the parameter β_t, a hyperparameter whose value can be held constant throughout the process or changed gradually over the successive steps. A variable schedule can be modeled with a range of functions (e.g., sigmoid, tanh, linear, etc.).

The derivation above is sufficient for predicting successive states. However, if we wish to sample at an arbitrary time step **t** without going through all the intermediate steps, thus allowing an efficient implementation, we can reformulate the equation by substituting the hyperparameter α_t = 1 − β_t and defining $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$. The reformulation then becomes:

$$q(\mathbf{x}_t \mid \mathbf{x}_0) = \mathcal{N}\left(\mathbf{x}_t;\; \sqrt{\bar{\alpha}_t}\,\mathbf{x}_0,\; (1-\bar{\alpha}_t)\,\mathbf{I}\right)$$
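As a concrete illustration, the closed-form expression above lets us jump directly to any noise level in a few lines of NumPy. This is a minimal sketch under assumed values (a linear β schedule with T = 1000, as in common DDPM implementations, and a toy 2-D data point), not the article's actual implementation:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # illustrative linear beta schedule
alphas = 1.0 - betas                    # alpha_t = 1 - beta_t
alpha_bars = np.cumprod(alphas)         # alpha_bar_t = product of alpha_s up to t

def q_sample(x0, t, rng):
    """Sample x_t from q(x_t | x_0) in one shot, without iterating t steps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal(2)             # toy 2-D data point
xT = q_sample(x0, T - 1, rng)           # near-isotropic Gaussian noise at t = T
```

Since alpha_bar_T is close to zero under this schedule, x_T retains almost nothing of x_0, which is exactly the behavior the forward process is designed to produce.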

To produce a sample at time step **t** given the probability density estimate available at time *t-1*, we can use another concept borrowed from thermodynamics called *Langevin dynamics*. Following *stochastic gradient Langevin dynamics* [2], we can sample new states of the system using only the gradient of the log density in a *Markov chain* of updates. Sampling a new data point at time *t* with step size **ε**, based on the earlier point at time *t-1*, can then be calculated as follows:

$$\mathbf{x}_t = \mathbf{x}_{t-1} + \frac{\epsilon}{2}\,\nabla_{\mathbf{x}}\log p(\mathbf{x}_{t-1}) + \sqrt{\epsilon}\,\mathbf{z}_t, \quad \mathbf{z}_t \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$$
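To see Langevin dynamics in isolation, the update rule can be sketched for a target whose score is known in closed form. Assuming a standard normal target (whose score is simply −x) and illustrative choices of step size and chain length, a minimal sketch might look like:

```python
import numpy as np

# Toy sketch of (unadjusted) Langevin dynamics: sample from a target density
# using only its score, grad log p(x). The step size and chain length below
# are illustrative assumptions, not tuned values.

def langevin_chain(score, x0, eps=0.1, n_steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        z = rng.standard_normal()
        # x_t = x_{t-1} + (eps/2) * grad log p(x_{t-1}) + sqrt(eps) * z
        x = x + 0.5 * eps * score(x) + np.sqrt(eps) * z
        samples.append(x)
    return np.array(samples)

# Standard normal target: log p(x) = -x^2/2 + const, so the score is -x.
samples = langevin_chain(score=lambda x: -x, x0=5.0)
```

After an initial burn-in period, the chain fluctuates around the target distribution, so its long-run mean and standard deviation approach 0 and 1.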

**4.** **Reconstruction**

The reverse process requires estimating the probability density at an earlier time step given the current state of the system. This means estimating **q**(x_{t-1} | x_t) starting at *t* = **T**, where the samples are isotropic Gaussian noise. However, unlike the forward process, estimating a previous state from the current state requires knowledge of the full data distribution, which we cannot obtain without a learning model that can predict such estimates. Therefore, we will have to train a neural network model that estimates p_θ(x_{t-1} | x_t) based on learned weights **θ** and the current state at time *t*. This can be estimated as follows:

$$p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t) = \mathcal{N}\left(\mathbf{x}_{t-1};\; \mu_\theta(\mathbf{x}_t, t),\; \Sigma_\theta(\mathbf{x}_t, t)\right)$$

The parameterization of the mean function was proposed by *Ho et al.* [3] and can be calculated as follows:

$$\mu_\theta(\mathbf{x}_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{x}_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(\mathbf{x}_t, t)\right)$$

The authors of *Ho et al.* [3] suggested using a fixed variance function, Σ_θ = β_t. The sample at time **t-1** can then be calculated as follows:

$$\mathbf{x}_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(\mathbf{x}_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(\mathbf{x}_t, t)\right) + \sqrt{\beta_t}\,\mathbf{z}, \quad \mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$$
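The reverse update can be sketched as follows. Here `eps_model` is a hypothetical placeholder for the trained network ε_θ(x_t, t) (it simply returns zeros), and the schedule values are assumptions; the point is only to show how the update formula is wired together:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # same illustrative schedule as the forward process
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def p_sample(x_t, t, eps_model, rng):
    """One reverse step:
    x_{t-1} = (x_t - beta_t/sqrt(1-alpha_bar_t) * eps_theta) / sqrt(alpha_t) + sqrt(beta_t) * z
    """
    eps_pred = eps_model(x_t, t)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_pred) / np.sqrt(alphas[t])
    # No noise is added on the final step (t = 0).
    z = rng.standard_normal(x_t.shape) if t > 0 else np.zeros_like(x_t)
    return mean + np.sqrt(betas[t]) * z

rng = np.random.default_rng(0)
x = rng.standard_normal(2)              # start from isotropic noise at t = T-1
for t in reversed(range(T)):
    x = p_sample(x, t, lambda x_t, t: np.zeros_like(x_t), rng)
```

With a real trained ε_θ in place of the zero placeholder, iterating this step from t = T down to t = 0 denoises pure noise back into a data sample.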

**5.** **Training and Results**

**5.1.** **Model Building**

The model used in diffusion model training follows patterns similar to a VAE network; however, it is often kept much simpler and more straightforward than other network architectures. The input layer has the same size as the data dimensions. There can be multiple hidden layers, depending on the required depth of the network. The intermediate layers are linear layers with their respective activation functions. The final layer returns to the same size as the original input layer, thus reconstructing the original data. In denoising diffusion networks, the final layer consists of two separate outputs, dedicated to the mean and variance of the predicted probability density, respectively.
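A minimal sketch of this architecture in plain NumPy might look like the following. A real implementation would use a deep learning framework; the layer sizes here, and the choice of tanh activations, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden = 2, 64                  # data dimension and hidden width (assumed)

W1 = rng.standard_normal((d_in + 1, d_hidden)) * 0.1   # +1 input for the time step
W2 = rng.standard_normal((d_hidden, d_hidden)) * 0.1
W_mu = rng.standard_normal((d_hidden, d_in)) * 0.1     # head for the predicted mean
W_var = rng.standard_normal((d_hidden, d_in)) * 0.1    # head for the predicted variance

def forward(x, t):
    """Input -> hidden layers with activations -> two output heads (mean, variance)."""
    h = np.tanh(np.concatenate([x, [t]]) @ W1)
    h = np.tanh(h @ W2)
    mu = h @ W_mu                        # same size as the input data
    var = np.exp(h @ W_var)              # exponentiate to keep the variance positive
    return mu, var

mu, var = forward(np.zeros(d_in), t=0.5)
```

Feeding the time step t as an extra input lets a single network serve every step of the chain instead of training one network per step.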

**5.2.** **Calculation of the loss function**

The objective of the network model is to optimize the following loss function, a variational bound on the negative log-likelihood:

$$L = \mathbb{E}_q\left[-\log p_\theta(\mathbf{x}_0)\right] \leq \mathbb{E}_q\left[-\log \frac{p_\theta(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T} \mid \mathbf{x}_0)}\right]$$

A reduced form of this loss function was proposed in *Sohl-Dickstein et al.* [1], which formulates the loss as a linear combination of KL divergences between pairs of Gaussian distributions and a set of entropies. This simplifies the calculation and facilitates the implementation of the loss function. The loss function then becomes:

$$L = \mathbb{E}_q\Big[ D_{KL}\big(q(\mathbf{x}_T \mid \mathbf{x}_0) \,\|\, p(\mathbf{x}_T)\big) + \sum_{t>1} D_{KL}\big(q(\mathbf{x}_{t-1} \mid \mathbf{x}_t, \mathbf{x}_0) \,\|\, p_\theta(\mathbf{x}_{t-1} \mid \mathbf{x}_t)\big) - \log p_\theta(\mathbf{x}_0 \mid \mathbf{x}_1) \Big]$$

Further simplification and improvement were proposed by *Ho et al.* [3], in which the mean parameterization described in the previous section for the forward process is substituted into the loss. The loss function then becomes:

$$L_{\text{simple}} = \mathbb{E}_{t,\,\mathbf{x}_0,\,\epsilon}\left[\left\|\epsilon - \epsilon_\theta\left(\sqrt{\bar{\alpha}_t}\,\mathbf{x}_0 + \sqrt{1-\bar{\alpha}_t}\,\epsilon,\; t\right)\right\|^2\right]$$
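The simplified objective can be sketched as follows; `eps_model` is again a hypothetical stand-in for the network being trained (here it returns zeros), and the batch size and schedule are assumed values:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def simple_loss(x0_batch, eps_model, rng):
    """L_simple: MSE between the true noise and the model's noise prediction."""
    n = x0_batch.shape[0]
    t = rng.integers(0, T, size=n)                         # random time step per example
    eps = rng.standard_normal(x0_batch.shape)              # target noise
    a = alpha_bars[t][:, None]
    x_t = np.sqrt(a) * x0_batch + np.sqrt(1.0 - a) * eps   # closed-form forward sample
    eps_pred = eps_model(x_t, t)
    return np.mean(np.sum((eps - eps_pred) ** 2, axis=1))

rng = np.random.default_rng(0)
x0 = rng.standard_normal((128, 2))                         # toy 2-D data batch
loss = simple_loss(x0, eps_model=lambda x_t, t: np.zeros_like(x_t), rng=rng)
```

In training, this scalar would be minimized with respect to the network weights by stochastic gradient descent over mini-batches of data.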

**5.3.** **Results**

The results of the forward process, which adds Gaussian noise following a *Markov chain*, can be seen in the following figure. The total number of time steps was 100; the figure shows 10 samples from the generated sequence.

The results of the reverse diffusion process can be seen in the figure below. The quality of the final result depends on the tuning of the hyperparameters and the number of training epochs.

**6.** **Conclusions**

In this article, we discussed the fundamentals of diffusion models and their implementation. Although diffusion models are computationally more expensive than other deep network architectures, they perform much better in certain applications. For example, in recent text and image synthesis tasks, diffusion models have outperformed other architectures [4]. More implementation details and code can be found in the following GitHub repository: https://github.com/azad-academy/denoising-diffusion-model.git

Subscribe and follow for more updates: azad-wolf.medium.com/

**References**

[1] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, Surya Ganguli, *Deep Unsupervised Learning using Nonequilibrium Thermodynamics*, ICML 2015.

[2] Max Welling, Yee Whye Teh, *Bayesian Learning via Stochastic Gradient Langevin Dynamics*, ICML 2011.

[3] Jonathan Ho, Ajay Jain, Pieter Abbeel, *Denoising Diffusion Probabilistic Models*, NeurIPS 2020.

[4] Prafulla Dhariwal, Alex Nichol, *Diffusion Models Beat GANs on Image Synthesis*, arXiv:2105.05233.
