[de] cs-229-deep-learning #106
Conversation
Thank you for your work @nanophilip! Just realized your translation is now ready for review. Please feel free to invite native speakers you may know who could go over your work.
de/cheatsheet-deep-learning.md (Outdated)

@@ -60,7 +60,7 @@

**11. Learning rate ― The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. This can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.**

- ⟶ Lernrate - Die Lernrate, oft mit α oder manchmal mit η bezeichnet, gibt an mit welcher Schnelligkeit die Gewichtungen aktualisiert werden. Die Lernrate kann konstant oder anpassend variierend sein. Die aktuell populärste Methode, Adam, ist eine Methode die die Lernrate anpasst.
+ ⟶ Lernrate - Die Lernrate, oft mit α oder manchmal mit η bezeichnet, gibt an mit welcher Rate die Gewichtungen aktualisiert werden. Die Lernrate kann konstant oder anpassend variierend sein. Die aktuell populärste Methode, Adam, ist eine Methode die die Lernrate anpasst.
~~Gewichtungen~~ but: Gewichte
*) Die Lernrate kann konstant oder ~~anpassend variierend sein.~~ dynamisch angepasst werden.
*) ~~Die aktuell populärste Methode, Adam, ist eine Methode die die Lernrate anpasst.~~ Am häufigsten wird die Methode Adam benutzt, welche die Lernrate dynamisch aktualisiert.
So the final corrected version:
Lernrate - Die Lernrate, oft mit α oder manchmal mit η bezeichnet, gibt an mit welcher Rate die Gewichte aktualisiert werden. Die Lernrate kann konstant oder dynamisch angepasst werden. Am häufigsten wird die Methode Adam benutzt, welche die Lernrate dynamisch aktualisiert.
Very good, thank you! Perhaps also "die Adam-Methode" instead of "die Methode Adam"?
"Adam-Methode" sounds nicer, indeed.
Now "Die aktuell populärste Methode, Adam, aktualisiert die Lernrate dynamisch." Good?
de/cheatsheet-deep-learning.md (Outdated)

**23. It is usually done after a fully connected/convolutional layer and before a non-linearity layer and aims at allowing higher learning rates and reducing the strong dependence on initialization.**

- ⟶
+ ⟶ Geschieht üblicherweise nach einer vollständig verbundenen/faltenden Schicht und vor einer nicht-linearen Schicht und bezweckt eine höhere Lernrate und eine Reduzierung der starken Abhängigkeit von der Initialisierung.
~~Geschieht~~ Wird üblicherweise nach einer ~~vollständig verbundenen~~ kompletten/faltenden und vor einer nicht-linearen Schicht durchgeführt und bezweckt ~~eine höhere~~ die Erhöhung der Lernrate und eine Reduzierung der starken Abhängigkeit vom initialen Wert der Lernrate.
Is "komplett" the same as / known as / used as "vollständig verbunden" in the literature?
de/cheatsheet-deep-learning.md (Outdated)

**26. [Input gate, forget gate, gate, output gate]**

- ⟶ [Eingangsgatter, Vergißgatter, Gatter, Ausgangsgatter]
+ ⟶ [Eingangstor, Vergesstor, Gatter, Ausgangstor]
[Input-Gatter, Forget-Gatter, Speicher-Gatter, Output-Gatter] for [Input gate, forget gate, gate, output gate]
NOTE: the current version of the English PDF in the repository lists them in a different order: [Input gate, Forget gate, Output gate, Gate] instead of [Input gate, forget gate, gate, output gate].
Have stuck to German words ("Eingangsgatter" instead of "Input-Gatter"). Acceptable? Also, is "Gate" "Speichergatter"?
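(Editorial aside, not part of the thread: the four terms being translated are the gates of an LSTM cell. A short NumPy sketch of where each one acts, with purely illustrative dimensions and weights; the "gate" in the cheatsheet's list corresponds to the tanh candidate below.)

```python
# Illustrative single LSTM step: input gate i, forget gate f,
# candidate/"gate" g, output gate o.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3
x = rng.normal(size=n_in)       # current input
h = np.zeros(n_hid)             # previous hidden state
c = np.zeros(n_hid)             # previous cell state

# One weight matrix and bias per gate, acting on [x, h].
W = {k: rng.normal(size=(n_hid, n_in + n_hid)) for k in "ifgo"}
b = {k: np.zeros(n_hid) for k in "ifgo"}
xh = np.concatenate([x, h])

i = sigmoid(W["i"] @ xh + b["i"])   # input gate  (Eingangsgatter)
f = sigmoid(W["f"] @ xh + b["f"])   # forget gate (Vergessgatter)
g = np.tanh(W["g"] @ xh + b["g"])   # candidate / "gate"
o = sigmoid(W["o"] @ xh + b["o"])   # output gate (Ausgangsgatter)

c = f * c + i * g                   # new cell state
h = o * np.tanh(c)                  # new hidden state
print(h)
```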
@nanophilip: feel free to review mine (ML tips and tricks)
Thanks for the work on the translation as well - it will be valuable to students! I just went quickly over it and didn't want to comment in detail before clarifying one thing: you tried to translate basically everything. I tend to use more and more of the English terminology, even for beginners, as this might help them once they become more accustomed to the subject and start to dig deeper into it. Is there a consensus on this? Personally, I would prefer to keep some English terms such as backpropagation and provide a translation in brackets the first time they are mentioned. And for bias, "Vorspannung" just doesn't feel like something they should really remember ...
Thank you for having a look at the translation and for your suggestions. In general, I dislike using English words if German equivalents exist. But I understand your reasoning and agree that a student would be well off knowing some English terminology as well. How about using German words in the German translation and providing some important original English terms - like backpropagation or bias - in parentheses?
Thanks for the comments, I agree with your suggestions!
Thanks everyone for your work and comments. @nanophilip, providing the original English terms for important concepts indeed sounds like a great idea! Please feel free to add them in your translation file where applicable.
Thank you @nanophilip and @bb08 for all your hard work! Moving forward with the merge!