Merge branch 'release-1.145.2'
niksv committed Oct 31, 2023
2 parents 9a14314 + abaf26a commit 3dc60a6
Showing 14 changed files with 482 additions and 76 deletions.
18 changes: 17 additions & 1 deletion CHANGELOG.md
@@ -1,6 +1,22 @@
# Changelog

## v1.145.1 (24/10/2023)
## v1.145.2 (30/10/2023)

### Bug Fixes:
- [#5381](https://github.com/telstra/open-kilda/pull/5381) skip haYPoint=endpoint when updating periodic pings

### Improvements:
- [#5444](https://github.com/telstra/open-kilda/pull/5444) Add a check whether a migrate script completed successfully. [**configuration**]
- [#5447](https://github.com/telstra/open-kilda/pull/5447) [TEST]: 5224: Ha-Flow: Ping: Updating switch triplet selection
- [#5453](https://github.com/telstra/open-kilda/pull/5453) #5390: [TEST] Attempt to fix several flaky tests by increasing waiting intervals (Issue: [#5390](https://github.com/telstra/open-kilda/issues/5390)) [**tests**]
- [#5454](https://github.com/telstra/open-kilda/pull/5454) [TEST]: Server42: Isl Rtt: Fixing refactoring issue
- [#5456](https://github.com/telstra/open-kilda/pull/5456) Store datapoints into special storage to save memory

For the complete list of changes, check out [the commit log](https://github.com/telstra/open-kilda/compare/v1.145.1...v1.145.2).

---

## v1.145.1 (26/10/2023)

### Bug Fixes:
- [#5445](https://github.com/telstra/open-kilda/pull/5445) Do not write false 'flow not found' log if monitoring is disabled
34 changes: 25 additions & 9 deletions docker/db-mysql-migration/migrate-develop.sh
@@ -3,13 +3,29 @@
set -e

cd /liquibase/changelog
liquibase \
--headless=true --defaultsFile=/liquibase/liquibase.docker.properties \
--username="${KILDA_MYSQL_USER}" \
--password="${KILDA_MYSQL_PASSWORD}" \
--url="${KILDA_MYSQL_JDBC_URL}" \
update --changelog-file="root.yaml"

echo "All migrations have been applied/verified"
rm -f /kilda/flag/migration.*

echo "******\nStart liquibase update using URL: ${KILDA_MYSQL_JDBC_URL}\n******"

if ! liquibase \
--headless=true --defaultsFile=/liquibase/liquibase.docker.properties \
--username="${KILDA_MYSQL_USER}" \
--password="${KILDA_MYSQL_PASSWORD}" \
--url="${KILDA_MYSQL_JDBC_URL}" \
update --changelog-file="root.yaml";
then
echo "******\nmigrate-develop.sh: DB migrations failure.\n******"
exit 1
fi

echo "******\nmigrate-develop.sh: All migrations have been applied/verified.\n******"
touch /kilda/flag/migration.ok
exec sleep infinity
if [ -z "${NO_SLEEP}" ]
then
echo "Set sleep infinity"
exec sleep infinity
else
echo "The migrate script completed"
exit 0
fi

32 changes: 16 additions & 16 deletions docker/db-mysql-migration/migrations/README.md
@@ -34,7 +34,7 @@ chunk into `root.yaml`
file: 001-feature-ABC.yaml
```

Tag for rollback operation (during rollback everithing that was applied after this tag will be rolled back)
Tag for rollback operation (during rollback everything that was applied after this tag will be rolled back)
```yaml
changeSet:
id: tag-for-some-migration
@@ -44,31 +44,31 @@ changeSet:
tag: 000-migration
```

To start DB update by hands you need to build migration container
To start a DB update manually, you need to build a migration image and execute the migration script. Optionally, you
can execute liquibase with arbitrary parameters.

To create an image, navigate to (TODO)
```shell script
docker-compose build db_mysql_migration
```

And execute following command (for DB on some foreign host):
To execute the migration script, run the command below (you can override other environment variables as well). Setting
`NO_SLEEP` makes the script exit normally; otherwise it sleeps indefinitely to keep the container running:
```shell script
docker run \
--volume=/etc/resolv.conf:/etc/resolv.conf --rm --network=host \
-e INSTALL_MYSQL=true \
open-kilda_db_mysql_migration:latest \
--username="kilda" \
--password="password" \
--url="jdbc:mysql://mysql.pendev/kilda" \
update --changelog-file="root.yaml"
docker run --volume=/etc/resolv.conf:/etc/resolv.conf --rm --network=host \
-e KILDA_MYSQL_JDBC_URL="jdbc:mysql://localhost:8101/kilda" \
-e NO_SLEEP=true \
--entrypoint=/kilda/migrate-develop.sh \
kilda/db_mysql_migration:latest
```

For rollback changes up to some specific tag, execute command
To run liquibase manually, for example to roll back changes up to a specific tag, execute the following command:
```shell script
docker run \
--volume=/etc/resolv.conf:/etc/resolv.conf --rm --network=host \
-e INSTALL_MYSQL=true \
open-kilda_db_mysql_migration:latest \
kilda/db_mysql_migration:latest \
--username="kilda" \
--password="password" \
--url="jdbc:mysql://mysql.pendev/kilda" \
--password="kilda" \
--url="jdbc:mysql://localhost:8101/kilda" \
rollback --changelog-file="root.yaml" --tag="some-specific-tag"
```
@@ -16,8 +16,9 @@
package org.openkilda.wfm.topology.opentsdb.bolts;

import org.openkilda.messaging.info.Datapoint;
import org.openkilda.wfm.topology.opentsdb.models.Storage;
import org.openkilda.wfm.topology.opentsdb.models.Storage.DatapointValue;

import lombok.Value;
import org.apache.storm.Config;
import org.apache.storm.Constants;
import org.apache.storm.task.OutputCollector;
@@ -30,7 +31,6 @@
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

@@ -43,7 +43,7 @@ public class OpenTsdbFilterBolt extends BaseRichBolt {

public static final Fields STREAM_FIELDS = new Fields(FIELD_ID_DATAPOINT);

private Map<DatapointKey, Datapoint> storage = new HashMap<>();
private final Storage storage = new Storage();
private OutputCollector collector;

@Override
@@ -63,11 +63,14 @@ public void execute(Tuple tuple) {

if (isTickTuple(tuple)) {
// OpenTSDB uses current epoch time (date +%s) in seconds
long now = System.currentTimeMillis();
storage.entrySet().removeIf(entry -> now - entry.getValue().getTime() > MUTE_IF_NO_UPDATES_MILLIS);
int initialSize = storage.size();
storage.removeOutdated(MUTE_IF_NO_UPDATES_MILLIS);
if (LOGGER.isDebugEnabled()) {
LOGGER.debug("Removed {} outdated datapoints from the storage", initialSize - storage.size());
}

if (LOGGER.isTraceEnabled()) {
LOGGER.trace("storage after clean tuple: {}", storage.toString());
LOGGER.trace("storage after clean tuple: {}", storage);
}

collector.ack(tuple);
@@ -100,27 +103,27 @@ public void declareOutputFields(OutputFieldsDeclarer declarer) {
private void addDatapoint(Datapoint datapoint) {
LOGGER.debug("adding datapoint: {}", datapoint);
LOGGER.debug("storage.size: {}", storage.size());
storage.put(new DatapointKey(datapoint.getMetric(), datapoint.getTags()), datapoint);
storage.add(datapoint);
if (LOGGER.isTraceEnabled()) {
LOGGER.trace("addDatapoint storage: {}", storage.toString());
LOGGER.trace("addDatapoint storage: {}", storage);
}
}

private boolean isUpdateRequired(Datapoint datapoint) {
boolean update = true;
Datapoint prevDatapoint = storage.get(new DatapointKey(datapoint.getMetric(), datapoint.getTags()));
DatapointValue prevDatapointValue = storage.get(datapoint);

if (prevDatapoint != null) {
if (prevDatapointValue != null) {
if (LOGGER.isTraceEnabled()) {
LOGGER.trace("prev: {} cur: {} equals: {} time_delta: {}",
prevDatapoint,
prevDatapointValue,
datapoint,
prevDatapoint.getValue().equals(datapoint.getValue()),
datapoint.getTime() - prevDatapoint.getTime()
prevDatapointValue.getValue().equals(datapoint.getValue()),
datapoint.getTime() - prevDatapointValue.getTime()
);
}
update = !prevDatapoint.getValue().equals(datapoint.getValue())
|| datapoint.getTime() - prevDatapoint.getTime() >= MUTE_IF_NO_UPDATES_MILLIS;
update = !prevDatapointValue.getValue().equals(datapoint.getValue())
|| datapoint.getTime() - prevDatapointValue.getTime() >= MUTE_IF_NO_UPDATES_MILLIS;
}
return update;
}
@@ -136,12 +139,4 @@ private boolean isTickTuple(Tuple tuple) {
private Values makeDefaultTuple(Datapoint datapoint) {
return new Values(datapoint);
}

@Value
private static class DatapointKey {

private String metric;

private Map<String, String> tags;
}
}
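The `isUpdateRequired` logic above suppresses repeated datapoints: a point is re-emitted only if its value changed or the mute interval elapsed. A standalone sketch of that rule, where the constant value and method shape are simplified stand-ins for the real bolt (which also tracks previous points in `Storage`):

```java
// Standalone sketch of OpenTsdbFilterBolt's suppression rule.
// Assumption: MUTE_IF_NO_UPDATES_MILLIS is taken here as 10 minutes;
// the real constant lives in the bolt and may differ.
public class UpdateRuleSketch {
    static final long MUTE_IF_NO_UPDATES_MILLIS = 10 * 60 * 1000L;

    // Returns true when the new datapoint should be forwarded downstream.
    static boolean isUpdateRequired(Number prevValue, long prevTime,
                                    Number curValue, long curTime) {
        if (prevValue == null) {
            return true; // nothing stored yet: always emit
        }
        return !prevValue.equals(curValue)
                || curTime - prevTime >= MUTE_IF_NO_UPDATES_MILLIS;
    }

    public static void main(String[] args) {
        long t0 = 0L;
        System.out.println(isUpdateRequired(null, 0L, 1, t0));      // true: first point
        System.out.println(isUpdateRequired(1, t0, 1, t0 + 1000));  // false: same value, too soon
        System.out.println(isUpdateRequired(1, t0, 2, t0 + 1000));  // true: value changed
        System.out.println(isUpdateRequired(1, t0, 1,
                t0 + MUTE_IF_NO_UPDATES_MILLIS));                   // true: mute interval elapsed
    }
}
```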
@@ -0,0 +1,103 @@
/* Copyright 2023 Telstra Open Source
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.openkilda.wfm.topology.opentsdb.models;

import org.openkilda.messaging.info.Datapoint;

import com.google.common.annotations.VisibleForTesting;
import lombok.ToString;
import lombok.Value;

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
import java.util.Optional;
import java.util.SortedMap;
import java.util.TreeMap;

@ToString
public class Storage implements Serializable {
public static final String NULL_KEY = "null";
public static final char TAG_KEY_DELIMITER = '_';
public static final char TAG_VALUE_DELIMITER = ':';
public static final String NULL_TAG = "NULL_TAG";

private final Map<String, DatapointValue> map;

public Storage() {
this.map = new HashMap<>();
}

public void add(Datapoint datapoint) {
map.put(createKey(datapoint), createValue(datapoint));
}

public DatapointValue get(Datapoint datapoint) {
return map.get(createKey(datapoint));
}

public void removeOutdated(long ttlInMillis) {
long now = System.currentTimeMillis();
map.entrySet().removeIf(entry -> now - entry.getValue().getTime() > ttlInMillis);
}

public int size() {
return map.size();
}

@VisibleForTesting
static String createKey(Datapoint datapoint) {
if (datapoint == null) {
return NULL_KEY;
}
StringBuilder key = new StringBuilder();
key.append(datapoint.getMetric());

if (datapoint.getTags() != null) {
SortedMap<String, String> sortedTags = getSortedTags(datapoint);
for (Entry<String, String> entry : sortedTags.entrySet()) {
key.append(TAG_KEY_DELIMITER);
key.append(entry.getKey());
key.append(TAG_VALUE_DELIMITER);
key.append(entry.getValue());
}
}
return key.toString();
}

private static DatapointValue createValue(Datapoint datapoint) {
if (datapoint == null) {
return null;
}
return new DatapointValue(datapoint.getValue(), datapoint.getTime());
}

private static SortedMap<String, String> getSortedTags(Datapoint datapoint) {
SortedMap<String, String> sortedTags = new TreeMap<>();
for (Entry<String, String> entry : datapoint.getTags().entrySet()) {
String key = Optional.ofNullable(entry.getKey()).orElse(NULL_TAG);
sortedTags.put(key, entry.getValue());
}
return sortedTags;
}

@Value
public static class DatapointValue implements Serializable {
Number value;
Long time;
}
}
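The `Storage` class above canonicalizes tag order through a `TreeMap`, so two datapoints with the same metric and tags map to the same key regardless of tag insertion order. A standalone sketch of this idea, with a plain metric string and tag map standing in for `Datapoint` (class and method names below are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

// Standalone sketch of Storage.createKey: metric plus tags flattened into
// one string key, with tags sorted so insertion order is irrelevant.
public class StorageKeySketch {
    static final char TAG_KEY_DELIMITER = '_';
    static final char TAG_VALUE_DELIMITER = ':';

    static String createKey(String metric, Map<String, String> tags) {
        StringBuilder key = new StringBuilder(metric);
        if (tags != null) {
            // TreeMap iterates tag keys in sorted order, canonicalizing the key
            for (Map.Entry<String, String> e : new TreeMap<>(tags).entrySet()) {
                key.append(TAG_KEY_DELIMITER).append(e.getKey())
                   .append(TAG_VALUE_DELIMITER).append(e.getValue());
            }
        }
        return key.toString();
    }

    public static void main(String[] args) {
        Map<String, String> a = new LinkedHashMap<>();
        a.put("switch", "sw1");
        a.put("port", "2");
        Map<String, String> b = new LinkedHashMap<>();
        b.put("port", "2");
        b.put("switch", "sw1");
        // Different insertion order, same canonical key
        System.out.println(createKey("flow.bytes", a));
        System.out.println(createKey("flow.bytes", b));
        System.out.println(createKey("flow.bytes", a)
                .equals(createKey("flow.bytes", b))); // true
    }
}
```

Note that this flat-string scheme assumes the delimiter characters do not appear in tag keys or values; if they can, distinct tag maps could collide on the same key.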
@@ -13,7 +13,7 @@
* limitations under the License.
*/

package org.openkilda.wfm.topology.opentsdb.bolt;
package org.openkilda.wfm.topology.opentsdb.bolts;

import static java.util.Collections.singletonMap;
import static org.apache.storm.Constants.SYSTEM_COMPONENT_ID;

import org.openkilda.messaging.info.Datapoint;
import org.openkilda.messaging.info.InfoData;
import org.openkilda.wfm.topology.opentsdb.bolts.OpenTsdbFilterBolt;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.tuple.Tuple;
