When using transfer learning, you typically freeze some layers, mainly the pre-trained ones, and train only the newly added layers.
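The freezing idea above can be sketched without any particular framework (the names here are illustrative; in a real library such as PyTorch you would set `param.requires_grad = False` on the pre-trained layers instead):

```python
# Framework-free sketch of layer freezing: only non-frozen layers are updated.

class Layer:
    def __init__(self, weight, trainable):
        self.weight = weight        # one scalar weight per layer, for simplicity
        self.trainable = trainable  # False = frozen (pre-trained), True = train it

def sgd_step(layers, grads, lr=0.1):
    """Apply one gradient step, skipping frozen layers."""
    for layer, grad in zip(layers, grads):
        if layer.trainable:
            layer.weight -= lr * grad

# Pre-trained layers are frozen; the newly added head is trainable.
model = [Layer(1.0, trainable=False),   # pre-trained layer (frozen)
         Layer(2.0, trainable=False),   # pre-trained layer (frozen)
         Layer(0.5, trainable=True)]    # new task-specific layer

sgd_step(model, grads=[0.3, 0.3, 0.3])
print([l.weight for l in model])  # frozen weights unchanged, head updated
```

Only the added layer moves during training, which is exactly what freezing buys you: the pre-trained features are preserved while the new head adapts to the task.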
In transfer learning or domain adaptation, we first train the model on one dataset. Then we train the same model on another dataset that has a different class distribution, or even different classes from the first training dataset.
I think "transfer learning" is the more general term, and "domain adaptation" is one scenario of transfer learning. Link
Fine-tuning means taking the weights of a trained neural network and using them as the initialization for a new model being trained on data from the same domain (e.g., images).
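A minimal sketch of that re-use of weights, with toy numbers standing in for a real network (the layer names and the fake gradient step are illustrative assumptions, not a real training loop):

```python
import copy

# Weights produced by the first training run (toy values).
pretrained_weights = {"conv1": [0.2, -0.1], "fc": [0.7]}

# Fine-tuning: initialize the new model from the pre-trained weights
# rather than from a random initialization.
new_model = copy.deepcopy(pretrained_weights)

# ...then continue training on the new, same-domain dataset; here a single
# hand-written gradient step on the final layer stands in for that training.
new_model["fc"] = [w - 0.1 * 0.5 for w in new_model["fc"]]

print(new_model["conv1"])  # still the pre-trained values: the starting point
```

The point is that training starts from the pre-trained solution, so the new model usually converges faster and needs less data than training from scratch.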