How Does Dimensional Regularisation Lead to a Finite Perturbation Expansion?

  • #1
latentcorpse
I am trying question 3 in this paper:
http://www.maths.cam.ac.uk/postgrad/mathiii/pastpapers/2006/Paper49.pdf

I am on the bit "Describe in outline how the functional integral with dimensional
regularisation leads to a finite perturbation expansion for the four dimensional quantum
field theory in terms of the action..."

Now I was just going to describe how we can use dimensional regularisation to identify the poles arising from the loop integrals we come across when working out Green's functions, and from that deduce the counterterms needed for renormalisation. After we add the counterterms, the bare Lagrangian [itex]\mathcal{L}_\text{B}=\mathcal{L}+\mathcal{L}_{\text{counter}}[/itex] has all divergences removed and the integrals are rendered finite.

However, the question is talking about functional integrals, whereas the integrals we have in regularisation/renormalisation are ordinary integrals. What do you reckon I have to do?
 
  • #2
To answer this question, you will need to be familiar with the concept of a functional integral. A functional integral computes expectation values in a quantum field theory: instead of an ordinary integral over a finite set of variables, one integrates over all possible field configurations, weighted by the action. This is what captures the interactions between the fields in the theory.

The ordinary integrals you meet in regularisation arise when the functional integral is expanded perturbatively: expanding the interaction part of [itex]e^{-S[\phi]}[/itex] in powers of the coupling reduces each term to Gaussian moments, which Wick's theorem organises into Feynman diagrams, and the loop integrals attached to those diagrams are exactly the ordinary momentum-space integrals you have in mind. Once we have this expansion, dimensional regularisation identifies the poles arising from the loop integrals, and these determine the counterterms required to renormalise the theory. By adding these counterterms to the bare Lagrangian, we remove all divergences order by order and obtain a finite perturbation expansion for the four-dimensional quantum field theory.
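As a concrete illustration (a standard [itex]\phi^4[/itex] example, not taken from the exam paper; conventions for [itex]\epsilon[/itex] and normalisations vary between textbooks), here is the Euclidean one-loop integral behind the four-point function in [itex]d = 4-\epsilon[/itex] dimensions, with [itex]\mu[/itex] the scale introduced to keep the coupling dimensionless:

```latex
% One-loop bubble in d = 4 - \epsilon Euclidean dimensions:
\mu^{\epsilon}\int \frac{d^d k}{(2\pi)^d}\,\frac{1}{(k^2+m^2)^2}
  = \mu^{\epsilon}\,\frac{\Gamma\!\left(2-\tfrac{d}{2}\right)}{(4\pi)^{d/2}}\,(m^2)^{d/2-2}
  = \frac{1}{16\pi^2}\left(\frac{2}{\epsilon} - \gamma_E
      + \ln\frac{4\pi\mu^2}{m^2}\right) + O(\epsilon).

% The 1/\epsilon pole fixes the one-loop coupling counterterm in minimal
% subtraction (three channels, interaction -\lambda\phi^4/4!):
\delta\lambda = \frac{3\lambda^2}{16\pi^2\,\epsilon},
\qquad
\mathcal{L}_\text{B} = \mathcal{L} + \mathcal{L}_{\text{counter}}.
```

The point is that the divergence shows up as a pole in [itex]\epsilon[/itex] rather than a divergent integral, so the counterterm that cancels it can be read off directly, and the limit [itex]\epsilon \to 0[/itex] of the renormalised expansion is finite.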
 

