
[de] cs-229-deep-learning #106

Merged: 10 commits, Apr 23, 2020
Conversation

@nanophilip (Contributor)

No description provided.

@shervinea shervinea added the in progress Work in progress label Jan 7, 2019
@shervinea shervinea added reviewer wanted Looking for a reviewer and removed in progress Work in progress labels Feb 24, 2019
@shervinea (Owner)

Thank you for your work @nanophilip! Just realized your translation was now ready for review. Please feel free to invite native speakers you may know who could go over your work.

de/cheatsheet-deep-learning.md (outdated, resolved)
@@ -60,7 +60,7 @@

**11. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. This can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.**

⟶ Lernrate - Die Lernrate, oft mit α oder manchmal mit η bezeichnet, gibt an mit welcher Schnelligkeit die Gewichtungen aktualisiert werden. Die Lernrate kann konstant oder anpassend variierend sein. Die aktuell populärste Methode, Adam, ist eine Methode die die Lernrate anpasst.
⟶ Lernrate - Die Lernrate, oft mit α oder manchmal mit η bezeichnet, gibt an mit welcher Rate die Gewichtungen aktualisiert werden. Die Lernrate kann konstant oder anpassend variierend sein. Die aktuell populärste Methode, Adam, ist eine Methode die die Lernrate anpasst.
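
As an aside on the definition quoted in this hunk: the difference between a fixed learning rate and Adam's adaptive per-parameter step can be sketched in a few lines. This is a minimal didactic sketch, not any framework's API; the function and state names are made up for illustration.

```python
import numpy as np

def sgd_step(w, grad, lr=0.1):
    """Plain gradient descent: the learning rate lr (alpha/eta) is fixed."""
    return w - lr * grad

def adam_step(w, grad, state, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: the effective step size adapts per parameter over time.

    `state` carries the running moment estimates (m, v) and step counter t;
    these names are illustrative, not a library interface.
    """
    m, v, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * grad        # first moment (running mean of grads)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (running mean of grad^2)
    m_hat = m / (1 - beta1 ** t)              # bias correction for the warm-up phase
    v_hat = v / (1 - beta2 ** t)
    w_new = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w_new, (m, v, t)
```

On a toy objective f(w) = w², both updates drive w toward 0, but Adam rescales the raw gradient by its running magnitude rather than applying a constant factor.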

~~Gewichtungen~~ → Gewichte

*) Die Lernrate kann konstant oder ~~anpassend variierend sein~~ dynamisch angepasst werden.
*) ~~Die aktuell populärste Methode, Adam, ist eine Methode die die Lernrate anpasst.~~ Am häufigsten wird die Methode Adam benutzt, welche die Lernrate dynamisch aktualisiert.

So the final correct one:
Lernrate - Die Lernrate, oft mit α oder manchmal mit η bezeichnet, gibt an mit welcher Rate die Gewichte aktualisiert werden. Die Lernrate kann konstant oder dynamisch angepasst werden. Am häufigsten wird die Methode Adam benutzt, welche die Lernrate dynamisch aktualisiert.

@nanophilip (Contributor, Author)

Very good, thank you! Perhaps also "die Adam-Methode" instead of "die Methode Adam"?


"Adam-Methode" sounds nicer, indeed.

@nanophilip (Contributor, Author)

Now "Die aktuell populärste Methode, Adam, aktualisiert die Lernrate dynamisch." Good?


<br>

**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.**

⟶
⟶ Geschieht üblicherweise nach einer vollständig verbundenen/faltenden Schicht und vor einer nicht-linearen Schicht und bezweckt eine höhere Lernrate und eine Reduzierung der starken Abhängigkeit von der Initialisierung.
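
The layer ordering being translated here (fully connected/convolutional layer, then batch normalization, then the non-linearity) can be sketched in a few lines. This is a didactic training-time sketch with made-up function names, not a framework API.

```python
import numpy as np

def dense(x, W, b):
    """A fully connected layer producing pre-activations."""
    return x @ W + b

def batch_norm(z, gamma, beta, eps=1e-5):
    """Normalize pre-activations over the batch dimension, then rescale.

    gamma and beta are learnable scale/shift parameters; this is the
    training-time form (no running statistics for inference).
    """
    mu = z.mean(axis=0)
    var = z.var(axis=0)
    z_hat = (z - mu) / np.sqrt(var + eps)
    return gamma * z_hat + beta

def relu(z):
    return np.maximum(z, 0.0)

# Placement described in the cheatsheet: dense -> batch norm -> non-linearity
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 8))                      # a batch of 32 examples
W, b = rng.normal(size=(8, 4)), np.zeros(4)
h = relu(batch_norm(dense(x, W, b), gamma=np.ones(4), beta=np.zeros(4)))
```

Normalizing before the non-linearity keeps each unit's input distribution stable across updates, which is why the technique tolerates higher learning rates and depends less on the initialization.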

~~Geschieht~~ Wird üblicherweise nach einer ~~vollständig verbundenen~~ kompletten/faltenden und vor einer nicht-linearen Schicht durchgeführt und bezweckt ~~eine höhere~~ die Erhöhung der Lernrate und eine Reduzierung der starken Abhängigkeit vom initialen Wert der Lernrate.

@nanophilip (Contributor, Author)

Is "komplett" the same as / known as / used as "vollständig verbunden" in the literature?


<br>

**26. [Input gate, forget gate, gate, output gate]**

&#10230; [Eingangsgatter, Vergißgatter, Gatter, Ausgangsgatter]
&#10230; [Eingangstor, Vergesstor, Gatter, Ausgangstor]
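
For readers unsure what the four gates under discussion actually compute: a single LSTM step can be sketched as below. This is a didactic sketch with invented parameter names, not a library interface.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, params):
    """One LSTM step using the four gates named in the cheatsheet.

    params maps a gate name to its weights (W, U, b); illustrative only.
    """
    def gate(name, act):
        W, U, b = params[name]
        return act(x @ W + h @ U + b)

    i = gate("input", sigmoid)    # input gate: how much new information enters
    f = gate("forget", sigmoid)   # forget gate: how much old cell state survives
    g = gate("gate", np.tanh)     # candidate update (the plain "gate"/Gatter)
    o = gate("output", sigmoid)   # output gate: how much cell state is exposed
    c_new = f * c + i * g         # gated cell-state update
    h_new = o * np.tanh(c_new)    # gated hidden state
    return h_new, c_new
```

The plain "gate" in the English list is the tanh candidate that the input gate modulates, which is why reviewers here debate rendering it as "Gatter" vs. "Speichergatter".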
@bb08 commented Apr 8, 2019

[Input-Gatter, Forget-Gatter, Speicher-Gatter, Output-Gatter] for [Input gate, forget gate, gate, output gate]

NOTE: the current version of the English PDF on git has a different order: [Input gate, Forget gate, Output gate, Gate] instead of [Input gate, forget gate, gate, output gate].

@nanophilip (Contributor, Author)

Have stuck to German words ("Eingangsgatter" instead of "Input-Gatter"). Acceptable? Also, is "Gate" "Speichergatter"?

@bb08 commented Apr 9, 2019

@nanophilip: feel free to review mine (ML tips and tricks)

@malteschilling

Thanks for the work on the translation as well - it will be valuable to students! I just went quickly over it and didn't want to comment in detail before clarifying: you tried to translate basically everything. I tend to use more and more of the English terminology even for beginners, as this might help them when they get more accustomed to the subject and start to dig deeper into it. Is there a consensus on this? Personally, I would prefer to use some English terms such as Backpropagation and, when mentioned the first time, provide a translation in brackets. Or for bias - Vorspannung just doesn't feel like something they should really remember ...

@nanophilip (Contributor, Author)

Thank you for having a look at the translation and for your suggestions. In general, I dislike using English words if German equivalents exist. But I understand your reasoning and agree that a student would be well off knowing some English terminology as well. How about using German words in the German translation and providing some important original English terms - like backpropagation or bias - in parentheses?

@bb08 commented Aug 7, 2019

Thanks for the comments, I agree with your suggestions!

@shervinea (Owner)

Thanks everyone for your work and comments.

@nanophilip, providing the original English terms for important concepts indeed sounds like a great idea! Please feel free to write them down in your translation file where applicable.

@shervinea (Owner)

Thank you @nanophilip and @bb08 for all your hard work! Moving forward with the merge!

@shervinea shervinea merged commit fb73020 into shervinea:master Apr 23, 2020
@shervinea shervinea changed the title [de] Deep learning [de] cs-229-deep-learning Oct 6, 2020