Version: 1.10.0
Release date: 2020-03-16
Linux64 binary | Linux64 JIT binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.1
Release date: 2020-03-24
Linux64 binary | Linux64 JIT binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.2
Release date: 2020-03-27
Linux64 binary | Linux64 JIT binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.3
Release date: 2020-03-30
Linux64 binary | Linux64 JIT binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.4
Release date: 2020-04-08
Linux64 binary | Linux64 JIT binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.5
Release date: 2020-04-14
Linux64 binary | Linux64 JIT binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.6
Release date: 2020-04-22
Linux64 binary | Linux64 JIT binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.7
Release date: 2020-05-23
Linux64 binary | Linux64 JIT binary | Linux64 ABI=1 binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.8
Release date: 2020-06-05
Linux64 binary | Linux64 JIT binary | Linux64 ABI=1 binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.9
Release date: 2020-06-15
Linux64 binary | Linux64 JIT binary | Linux64 ABI=1 binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.10
Release date: 2020-06-22
Linux64 binary | Linux64 JIT binary | Linux64 ABI=1 binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.11
Release date: 2020-07-02
Linux64 binary | Linux64 JIT binary | Linux64 ABI=1 binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.12
Release date: 2020-07-20
Linux64 binary | Linux64 JIT binary | Linux64 ABI=1 binary | Windows64 binary | Windows64 JIT binary
Version: 1.10.13
Release date: 2020-08-15
Linux64 binary | Linux64 ABI=1 binary | Windows64 binary
Version: 1.10.14
Release date: 2020-08-31
Linux64 binary | Linux64 ABI=1 binary | Windows64 binary
Version: 1.10.15
Release date: 2020-09-14
Linux64 binary | Linux64 ABI=1 binary | Windows64 binary
Version: 1.10.16
Release date: 2020-09-27
Linux64 binary | Linux64 ABI=1 binary | Windows64 binary
Version: 1.10.17
Release date: 2020-10-23
Linux64 binary | Linux64 ABI=1 binary | Windows64 binary
New Features
- When an exception is thrown, the call stack is displayed.
- Allow the limit value in the 'context by limit' statement to be negative, in which case the last rows of each group are selected.
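  For example, a minimal sketch (the table and data here are hypothetical):
    trades = table(`A`A`A`B`B as sym, 1.0 2.0 3.0 4.0 5.0 as price)
    select * from trades context by sym limit -2    // keeps the last 2 rows of each group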
- Just-in-time compilation (JIT) version added new mathematical functions: all cumulative distribution functions and their inverse functions, and functions sinh, cosh, tanh, asinh, acosh, atanh, deg2rad, rad2deg.
- Added mathematical functions: exp2, expm1, log2, log10, log1p, cbrt, square.
- Added functions: mmad, groups, ifirstNot, ilastNot, kama and trueRange. (1.10.3)
- Added function segment to divide a vector into groups. Each group is composed of identical values next to each other. For example, [1,1,2,2,1,1,1] is divided into 3 groups: [1,1], [2,2] and [1,1,1]. (1.10.4)
- Can use top/limit 0 or an invalid where condition such as 1=0 in a SQL query to generate an empty table. (1.10.4)
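  A minimal sketch of the two items above (the table t is hypothetical):
    segment([1,1,2,2,1,1,1])          // 3 groups of adjacent identical values: [1,1], [2,2], [1,1,1]
    t = table(1..5 as id, rand(10.0, 5) as x)
    select top 0 * from t             // returns an empty table with the schema of t
    select * from t where 1=0         // also returns an empty table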
- Added configuration parameters 'remoteHost' and 'remotePort'. If these parameters are specified, DolphinDB program can be started as a terminal for a remote server. (1.10.4)
- Allow filtering the columns of a matrix with a lambda function. (1.10.5)
- Added mathematical functions integral and derivative for calculating integrals and derivatives. (1.10.6)
- Added function getEnv to get OS environment variables. For example, under the Linux environment, getEnv("PATH") will return the value of the system environment variable PATH. (1.10.6)
- Added function conditionalFilter(X, condition, filterMap). The parameter 'filterMap' is a dictionary. If an element in the vector 'condition' is a key of 'filterMap', and the corresponding element in the vector 'X' is an element in the values corresponding to the key in the dictionary, return true; otherwise return false. (1.10.6)
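  A minimal sketch of getEnv and conditionalFilter (the data is illustrative; the dictionary maps each key of 'condition' to the values of 'X' that pass the filter):
    getEnv("PATH")                           // value of the PATH environment variable
    x = [1, 2, 3, 4]
    cond = ["a", "b", "a", "c"]
    fm = dict(["a", "b"], [[1, 3], [2]])     // key "a" allows 1 or 3; key "b" allows 2
    conditionalFilter(x, cond, fm)           // expected: [true, true, true, false]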
- Added function convertExcelFormula to convert Excel formulas into DolphinDB expressions. (1.10.7)
- Added function coevent to count the number of event pairs that appear together within a given time interval. (1.10.7)
- Added function signum to return the sign of a number. (1.10.7)
- Added horizontal aggregation functions for processing data by rows: rowAnd, rowOr, rowXor, rowProd. (1.10.7)
- Added sliding window functions: mwavg and mwsum. (1.10.7)
- Added cumulative window functions: cumavg, cumstd, cumvar, cummed, cumsum2, cumsum3, cumsum4, cumwavg, cumwsum, cumbeta, cumcorr, cumcovar, cumpercentile. (1.10.7)
- Added cumulative window function: cumrank. (1.10.8)
- Added function getChunkPath, which returns the paths of the chunks that the given data sources represent. (1.10.9)
- Added machine learning functions: adaBoostClassifier and adaBoostRegressor. (1.10.11)
- Added command closeSessions. It allows an administrator to close one or more specified sessions to release resources. (1.10.13)
- Added functions getChunksMeta and getTabletsMeta to obtain metadata of the partitions (chunks) and partitioned subtables (tablets) on the data node, such as the occupied disk space, the number of rows and the version number of each partition. (1.10.13)
- Added configuration parameter warningMemSize (in GB). The default value is 75% of maxMemSize. When memory usage exceeds warningMemSize, the system will automatically clean up the cache of some databases to avoid OOM exceptions. (1.10.16)
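  For example, in the node configuration file (the values are illustrative):
    maxMemSize=64
    warningMemSize=48    // start evicting database caches once memory usage exceeds 48 GB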
- Added support for client APIs to connect to the database asynchronously (the Windows version does not support it yet). (1.10.17)
- Added the delta compression algorithm, which allows users to select the compression algorithm for each column when creating a distributed table (createPartitionedTable) or a dimension table (createTable). The delta compression algorithm usually has a higher compression ratio than the lz4 algorithm on time/date type fields. (1.10.17)
- Added an optional parameter 'method' to function compress to specify the compression algorithm. Added function decompress for decompressing the compressed data. (1.10.17)
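  A minimal sketch of compress/decompress with the new 'method' parameter (passing the method positionally is an assumption; the data is illustrative):
    x = 2020.01.01..2020.12.31     // a DATE vector, which delta compression handles well
    y = compress(x, "delta")
    decompress(y)                  // restores the original vector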
- Added function repmat to create tiling of copies of a matrix. (1.10.17)
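  A minimal sketch (assuming repmat(X, rowRep, colRep) tiles X rowRep times vertically and colRep times horizontally):
    m = 1..4$2:2        // a 2x2 matrix
    repmat(m, 2, 3)     // expected: a 4x6 matrix consisting of 2x3 copies of m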
Improvements
- The parameters 'rowIndex' and 'colIndex' of the function slice now support arrays.
- Improved the performance of certain 'context by' statements by 5-10 times.
- Improved the performance of higher-order function moving by about 20%.
- Improved the stability of DFS and RAFT.
- Improved the performance of "in" filtering condition when querying a keyed table with multiple keys. (1.10.1)
- An empty subarray can be obtained by specifying the same value for the starting and the ending position of the subarray in function subarray. For example: subarray(x, 0:0). (1.10.2)
- In function subarray, the starting or the ending position of the subarray can now be empty. For example: subarray(x, 2:) or subarray(x, :5). (1.10.2)
- Parameter 'input' of function iterate can contain NULL values. A NULL value is treated as 0 in calculation. (1.10.3)
- Improved the performance of function iif. In most cases, performance can be doubled. (1.10.4)
- Function loadText supports files with carriage return ('\r') as line breaks. (1.10.4)
- When using an empty string as an IP address, it no longer throws an exception, but returns an empty IP address. (1.10.4)
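  A minimal sketch of the subarray forms above (x is illustrative):
    x = 10 20 30 40 50 60
    subarray(x, 0:0)    // an empty subarray
    subarray(x, 2:)     // from index 2 to the end
    subarray(x, :5)     // from the start up to (but excluding) index 5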
- For functions char, short, int, long, float and double, if the input string is empty or not a numeric value, a NULL value is returned instead of 0. (1.10.4)
- If an error occurs in the execution of function restore, an exception will be thrown. (1.10.4)
- Function migrate can restore all databases and tables in a backup folder. (1.10.4)
- If the last character of the database path in functions dropDatabase and existsDatabase is a slash or a backslash, it will be automatically removed. (1.10.4)
- If the input of function rank is an empty vector, it returns an empty vector instead of throwing an exception. (1.10.5)
- When the parameter 'forceDelete' of function dropPartition is set to true, partition deletion is allowed even if the number of copies of the specified partition is 0. (1.10.5)
- An exception is thrown if the parameter 'partitionPaths' of function dropPartition indicates filtering conditions and contains a NULL value. (1.10.5)
- Added the restriction that functions related to DFS database operations (including addValuePartitions, addRangePartitions, append!, createPartitionedTable, createTable, database, dropDatabase, setColumnComment, setRetentionPolicy, and tableInsert) can only be executed on data nodes. (1.10.5)
- If the 'if' branch or 'else' branch of an if/else statement contains illegal components, an exception will be thrown. (1.10.5)
- If the same value of the parameter 'jobId' is used repeatedly when submitting jobs with function submitJob, the maximum number of automatically generated job IDs prefixed with the value of 'jobId' and today's date is increased from 1,000 to 10,000. (1.10.6)
- SQL update and delete statements now support scalar-based logical expressions such as 1 = 1 or 1 = 0. (1.10.7)
- In the table returned by getStreamingStat.subWorkers about workers of subscriber nodes, each row represents a subscription topic. (1.10.7)
- When unsubscribing from a stream table (unsubscribeTable), all messages of the topic in the message queue of the execution thread will be deleted. (1.10.7)
- If a SQL statement involves multiple partitions of a table, it is forbidden to use functions whose results are sensitive to the order of rows, such as mavg, isDuplicated, etc., in the 'where' clause. (1.10.7)
- Function sqlColAlias now supports composite columns. (1.10.7)
- In a SQL 'context by' or 'group by' statement, if there is an error in the calculation of an individual group due to the data (such as calculating the inverse of a singular matrix), the result of the group is set to NULL and the statement is executed successfully. The system no longer throws an exception to interrupt the execution. (1.10.7)
- When clearing persistent data with function clearTablePersistence, the system no longer prevents other functions (such as getStreamingStat) from accessing the persistence manager. (1.10.7)
- Improved functions rank and mrank: added optional parameters 'ignoreNA' and 'tiesMethod'. 'ignoreNA' ignores NULL values; 'tiesMethod' determines how to rank records with the same value and currently supports 'min', 'max' and 'average'. (1.10.8)
- Improved parameter verification of function dropPartition. If the partition paths contain duplicate values, an error message will be thrown. (1.10.8)
- Function convertExcelFormula added support for Excel functions: countifs, sumifs, averageifs, minifs, maxifs, and rank. (1.10.8)
- Adjusted some parameter names in functions nunique, isDuplicated, ewmMean, ewmStd, ewmVar, ewmCovar, ewmCorr, knn, multinomialNB, gaussianNB, zTest, tTest and fTest to be consistent with the parameter naming conventions in DolphinDB. (1.10.8)
- Improved function run by adding an optional parameter 'newSession'. If set to true (the default value is false), the script is executed in a new session, and the variables of the original session are not deleted. (1.10.8)
- Improved the stability of DFS tables. In particular, solved an issue where repeated deletion of a partition may result in inconsistency of table versions.
- The last joining column of aj now supports 3 more data types: uuid, ipaddr and int128. (1.10.9)
- Can back up and restore dimension tables. (1.10.9)
- Added checks when aj or wj uses at least one partitioned table. The joining columns except the last one must include all partitioning columns. (1.10.9)
- When a time-series streaming aggregator receives new data, the number of columns in the new data is checked. (1.10.9)
- Can use table aliases in nested joins. (1.10.9)
- Can use aliases for dimension tables in joins. (1.10.9)
- Added an optional parameter 'minPeriods' to the higher-order function moving. (1.10.9)
- Can add or delete columns in shared in-memory tables. (1.10.9)
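  A minimal sketch of moving with 'minPeriods' (assuming it is the optional fourth argument):
    x = 1 2 3 4 5
    moving(avg, x, 3)       // 3-element windows; the first 2 results are NULL
    moving(avg, x, 3, 1)    // with minPeriods=1, results start from the first element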
- It is forbidden to directly access the fields of a shared in-memory table or an MVCC table through <tableName>.<colName>. You can use the field name as an index to access table fields, such as t["col1"]. (1.10.10)
- It is forbidden to add new fields through the update statement in a shared partitioned in-memory table. (1.10.10)
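  A minimal sketch of the indexed access described above (the shared table is hypothetical):
    share table(1..3 as col1, `a`b`c as col2) as t
    t["col1"]      // allowed: access a field by name
    // t.col1      // no longer allowed for shared in-memory tables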
- Enable TCP_KEEPALIVE when creating TCP connections between nodes in the DolphinDB cluster. (1.10.10)
- The minimum cache size of a stream table is reduced from 100,000 rows to 1000 rows. (1.10.11)
- The minimum allowed value of the parameter 'throttle' of function subscribeTable is reduced from 1 second to 0.001 second. (1.10.11)
- Function dictUpdate! can be applied to a dictionary with an ANY vector as the value of the dictionary. (1.10.11)
- Added parameter verification to function loadTable. When loading a DFS table, it is not allowed to specify the partitions to load. (1.10.11)
- The SQL UPDATE statement now requires that the object to be updated must be a table. (1.10.12)
- Temporal type conversion functions now support tuple as the input. The functions involved include: date, month, year, hour, minute, second, time, datetime, datehour, timestamp, nanotime, nanotimestamp, weekday, dayOfWeek, dayOfYear, dayOfMonth, quarterOfYear, monthOfYear, weekOfYear, hourOfDay, minuteOfHour, secondOfMinute, millisecond, microsecond, nanosecond. (1.10.12)
- Improved the stability of the distributed database. Specifically, improved the stability of transaction resolution when the chunk versions are inconsistent; reduced the chances that heartbeat transmission is delayed. (1.10.12)
- The parameter 'groupingCol' of function contextby is allowed to be an empty array. (1.10.12)
- Use OpenBLAS and LAPACK to improve the performance of the following matrix-related functions: inverse, solve, det and cholesky. When used on a large matrix, the performance of these functions is improved by 10 to 50 times. (1.10.13)
- The function lu can decompose a matrix that is not a square matrix. (1.10.13)
- Added a label 'partitionTypeName' to the output of function schema to describe the partition type. (1.10.13)
- Functions in and find support searching for key values in keyed tables and indexed in-memory tables. (1.10.13)
- Added an optional parameter 'sharedName' to function syncDict. If it is specified, the dictionary will be shared across all sessions on the node. (1.10.13)
- Added 2 columns ('createTime' and 'lastActiveTime') to the output of function getSessionMemoryStat to record the session creation time and the last access time respectively. Corrected the value of the column 'remotePort'. (1.10.13)
- Added 2 columns ('remoteIP' and 'remotePort') to the output of function getConsoleJobs. (1.10.13)
- Function backup now supports parallel backup to improve efficiency. (1.10.13)
- When a constant is assigned to a variable, the object is copied, to avoid the reduction in system efficiency caused by concurrent modification of the reference count during multi-threaded parallel computing. (1.10.13)
- Improved the stability of the implementation of the RAFT consensus protocol.(1.10.13)
- Used TCMalloc to manage the memory pool, which improves memory allocation efficiency, especially the allocation of small memory blocks during multi-threaded parallel computing. This also resolves two problems: first, the actual memory occupied by DolphinDB could exceed the value of the configuration parameter maxMemSize; second, an OOM could occur when creating a string even though memory was still available. (1.10.14)
- When using the function saveText, a double type value retains a maximum precision of 15 digits. (1.10.14)
- Added an optional parameter 'useSystemTime' to crossSectionalAggregator. When it is set to false, the output calculation time is the time of the event itself, which better supports the playback of historical data for simulation. (1.10.14)
- The gaussianNB module now uses the logarithmic form of the likelihood for classification prediction, which makes it possible to use the model for classification in high-dimensional situations. (1.10.14)
- Improved the performance of the pca function by switching to the SVD algorithm of LAPACK. (1.10.14)
- Added parameter 'regularizationCoeff' to function logisticRegression. (1.10.14)
- The configuration parameter 'dfsReplicaReliabilityLevel' can now take the value of 2, which means the replicas are distributed to different machines if resources permit. (1.10.14)
- Can subscribe to a DolphinDB stream table from an external network. (1.10.15)
- The TCP_USER_TIMEOUT option can be enabled for the socket connection between the API and the DolphinDB server. In the Linux version with high availability enabled, enabling TCP_USER_TIMEOUT allows the client and the peer server to detect more quickly that a data node has failed due to a power outage. (1.10.16)
- The socket connection on the server side enables the TCP keep-alive mechanism (SO_KEEPALIVE). When the client is accidentally disconnected, it can be detected more promptly to release the connection and recover resources. (1.10.16)
- When copying a matrix, the row and column labels of the matrix are also copied. (1.10.17)
Bug Fixes
- The performance of function backup deteriorates after running for a period of time.
- The out-of-memory problem during concurrent reading of a distributed table may cause deadlocks.
- When function sum or avg is used in function createTimeSeriesAggregator and all rows in a group contain NULL values, the result should be a NULL value instead of 0. (1.10.1)
- Fixed a bug in the computation of sum or avg using a hash approach in SQL statements. If all rows in a group contain NULL values, the result should be a NULL value instead of 0. (1.10.1)
- In the Windows version of DolphinDB server, closing a client subscription would cause other subscribers on the same node to fail to receive new messages. (1.10.1)
- Fixed parsing errors for strings ending with '\\', e.g., "hello\\". It no longer throws an exception. (1.10.1)
- If a function in a module is used in a scheduled job, the module cannot be used after server restart. (1.10.1)
- In linear programming (function linprog), the accumulation of rounding errors in iterations may lead to incorrect results. (1.10.1)
- Fixed a bug in selecting the top rows after sorting string arrays and non-string arrays sequentially. It may lead to incorrect results of function isortTop. (1.10.1)
- The system would register duplicate module functions when a module file is executed in the console or GUI multiple times. It may lead to system crash or thrown exceptions. (1.10.1)
- Removed unnecessary output in the console in certain situations when function slice is applied to a matrix. (1.10.1)
- Fixed a crash bug in the Windows JIT version. The system would crash if a user-defined JIT function throws an exception. (1.10.1)
- Function update! used with multiple filtering conditions generates incorrect results. (1.10.2)
- Fixed a bug that queries throw exceptions after inserting an empty table into an empty dimension table. (1.10.2)
- Fixed a bug with function iterate. The system may erroneously determine that the parameter 'input' contains NULL values, which causes parameter validation failure. (1.10.2)
- Fixed a bug with function array. For a FLOAT or DOUBLE array, if the parameter 'defaultValue' of function array is set to between 0 and 0.5, the elements of the array will be erroneously assigned the value of 0. (1.10.3)
- Fixed a bug introduced in version 1.10.0. When some columns in a SQL query explicitly or implicitly use the same alias, the system crashes. (1.10.3)
- Fixed a bug in using 'order by' after 'context by' or 'group by'. If the field to be sorted is already in the order specified by the user (so no rearrangement is needed), the generated query result (an in-memory table) continues to be used for calculation, and some fields may produce incorrect results. (1.10.4)
- The result of function trueRange in a SQL query with a 'context by' clause may be incorrect. (1.10.4)
- Fixed a bug introduced in version 1.10.0. When remotely calling a partial application function from an API or with function remoteRun, if an exception is thrown during the construction of the partial application function, the system may crash. (1.10.4)
- Fixed a bug introduced in version 1.00.6. Functions loadText, ploadText and loadTextEx generate incorrect results when loading strings representing DOUBLE or FLOAT types starting with '.' or '-.'. For example, '.12' and '-.12' are incorrectly parsed as 12. (1.10.4)
- Function convertEncode does not work in the Linux version. (1.10.5)
- When the parameter 'msgAsTable' of function subscribeTable is set to false, and only one message in the new batch satisfies the filtering condition, a message that does not necessarily satisfy the filtering condition is sent to the client. (1.10.5)
- The execution of aggregate functions with partitioned tables may cause an error of duplicate column names. For example, if MapReduce is used in the execution of a group by statement with a partitioned table, the names of intermediate columns are "col"+number, such as "col1", "col2", etc. If a group-by column happens to have the same name as an intermediate column, an error message about duplicate column names is generated. (1.10.5)
- Function loadText may parse DOUBLE type as DATE type in rare cases. (1.10.5)
- Fixed a memory leak bug when deleting all data of a shared in-memory table if at least one column in the table is a big array. (1.10.6)
- If a variable type cannot be determined in the JIT version, errors may occur during compilation, resulting in crashes or reduced execution efficiency. After fixing the bug, if a variable type cannot be determined, the compilation will be aborted and the variable name will be reported. (1.10.6)
- Fixed a bug introduced in version 1.10.3. If the keys of a dictionary are of LONG type and the values are of ANY type, searching the dictionary for a key may incorrectly return nothing. (1.10.6)
- Fixed a bug that may cause crash when performing equal join (ej) on two shared in-memory tables. The system may crash if one thread deletes all the data of two shared in-memory tables and then adds new data, and if another thread performs equal join on them with multiple joining columns that include at least a STRING type column. (1.10.6)
- Fixed a bug in function createCrossSectionalAggregator when the parameter 'triggeringPattern' is set to "interval". The calculation is triggered not only at the prescribed intervals, but also possibly every time data is inserted. (1.10.7)
- Fixed a bug that may cause a system crash if the parameters of a partial application in an RPC call do not use the correct format. (1.10.7)
- If a SQL query with multiple OR conditions that contain both partitioning columns and non-partitioning columns in the where clause is applied on a table with value partitioning scheme, the result may contain more rows than expected. (1.10.7)
- Function wsum returns 0 when both parameters contain only NULL values. Now it returns NULL. (1.10.7)
- When both parameters 'csort' and 'limit' are specified in function sql, the generated SQL statement cannot find the columns specified by 'csort'. (1.10.8)
- When the hash algorithm is used to execute aggregate functions in groups in SQL statements, if the result contains NULL values, the system does not set a NULL value flag. Therefore, if the results are further filtered with function isNull, the system cannot detect the NULL values. (1.10.8)
- If the hash algorithm is used to execute aggregate function wsum in SQL group-by calculations, and both inputs of function wsum are NULL, the result should be NULL instead of 0. (1.10.8)
- Fixed a bug introduced in 1.10.7. With multiple streaming executors, executing getStreamingStat would cause the system to crash. (1.10.8)
- Fixed a memory leak caused by allocating more than 2GB to a contiguous memory block. (1.10.9)
- When multiple batch jobs that call mr or imr are running concurrently, if an exception occurs (e.g., a partition is locked by another transaction and cannot be written to), it may cause the system to crash. (1.10.9)
- When the time-series aggregator performs grouping calculations with useSystemTime=true, if there is no data in the windows, calculation results are erroneously generated. (1.10.9)
- Fixed a bug with built-in concurrent hash table. This bug may cause the system to crash when creating and accessing shared variables concurrently. (1.10.9)
- A DFS database with multiple levels of directories (e.g., dfs://stock/valueDB) cannot be properly backed up and restored. (1.10.10)
- In equal join, if the data type of the joining column is STRING in the left table and SYMBOL in the right table, and the right table has only 1 row, the result is incorrect in that it always returns an empty table. (1.10.10)
- When joining a DFS table and a dimension table, if all the following conditions are met: (1) no records satisfy the joining conditions; (2) wildcard (*) is used in the select clause; (3) DFS table name and the table alias used in joining are different; (4) there is a column with the same name in both tables, then the system will throw an exception that it cannot find the column with the same name in both tables. (1.10.10)
- The results are erroneous when a large dictionary is serialized asynchronously. (1.10.11)
- After enabling high availability for the controller node, if a transaction involves too many partitions so the RAFT message length exceeds 64K, the metadata will be truncated when the RAFT message is replayed after restarting the system. (1.10.12)
- If a SQL statement has a WHERE clause, the GROUP BY clause contains multiple fields, and the second or a subsequent field in the GROUP BY clause uses function segment, the result does not match the expectation. (1.10.12)
- When all elements of a vector are identical, the results of functions mvar and cumvar may have extremely small negative values; the results of functions mstd and cumstd may have NULL values. (1.10.13)
- A memory leak may occur during socket connection. (1.10.13)
- Function adaBoostRegressor may crash under certain circumstances. (1.10.13)
- After a high-availability cluster adds a data node online, creating a new database partition on the new node may cause the new node to crash. (1.10.13)
- When using JSON to make web calls, if the tag 'functionName' is not specified, the node will crash. This may occur when using Grafana to access DolphinDB. (1.10.14)
- When using the fromJson function to process JSON strings, if the tag 'value' is not included, the node may crash. (1.10.14)
- Fixed a bug in the implementation of the RAFT snapshot checkpoint. The bug may lead to a particularly time-consuming leader switch. (1.10.14)
- If the configuration parameter newValuePartitionPolicy=add (allowing the system to automatically add value partitions), when multiple concurrent writing threads add a large number of new partitions in a short period of time (usually in a stress test or development environment), partition loss may occur, i.e., the data written to the database cannot be queried.(1.10.15)
- When the new values of function replace or replace! are floating-point numbers, the fractional part will be ignored, generating incorrect results. (1.10.15)
- Fixed a bug where using in-memory partitioned tables as the data source of function mr or imr would cause the system to crash. (1.10.15)
- When browsing data in the DFS Explorer of the web-based cluster manager, user access control is not enabled. (1.10.16)
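  A minimal sketch of the replace fix (assuming replace(X, oldValue, newValue); the data is illustrative):
    x = [1.0, 2.0, 3.0]
    replace(x, 2.0, 2.5)    // now yields [1.0, 2.5, 3.0]; previously the fractional part of 2.5 was dropped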
- Using function median in window join may cause the system to crash if the input data contains NULL values. (1.10.17)
- When multiple aggregate functions are used in window join, if optimized aggregate functions (such as avg, sum, min, max, last, first, med, beta, etc.) are located after unoptimized aggregate functions, the system will crash. (1.10.17)
- Using function cumrank may cause the system to crash. (1.10.17)
- If the last character of the input of function split is a separator, the last empty string is not included in the result. (1.10.17)
- When a SQL query with the map clause is applied on a distributed table with a single partition, an empty table is erroneously returned. (1.10.17)
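  A minimal sketch of the split fix:
    split("a,b,", ",")    // now yields ["a","b",""]; the trailing empty string used to be dropped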
- Using function sqlDS to create a data source from a dimension table to which no data has yet been written, and passing it to function mr, will cause the system to crash. (1.10.17)
- Fixed an occasional concurrency problem during system initialization. This problem throws an exception similar to "No corresponding BinaryBooleanOperator defined for gt". (1.10.17)
Improvements
- Added new keywords: cgroup and map. (1.10.7)
Bug Fixes
- Fixed the bug that the row numbers of the first and last rows of the script selected for execution are not displayed correctly in the log panel.
- Plugin Source Code
- MySQL
- Fixed a bug that data containing LONGTEXT type columns cannot be imported successfully. (1.10.7)
- Released Support Vector Machine(SVM) plugin. (1.10.9)
- Released XGBoost plugin. (1.10.9)
- HDF5 Plugin
- Added parameter 'transform' to function loadHDF5Ex to support custom data conversion. (1.10.15)
Improvements
- Added parameter fetchSize to support transfer in blocks for query results with a large amount of data. (1.10.16)
Improvements
- Now check whether the column labels of a DataFrame are valid. (0.1.15.20)
Bug Fixes
- Uploading numpy.matrix to DolphinDB server causes a crash. (0.1.15.20)
- Check if the column labels of a DataFrame are valid. (0.1.15.20)
- Exceptions are thrown in the Python API when the session method loadTable is used to load specified partitions. (0.1.15.23)
- Added support for ipaddr, uuid and int128 data types. (0.1.15.23)
- Added support for arrays of month type. (0.1.15.23)
- Added the hashBucket function. (0.1.15.23)
- Orca: Fixed the problem of calculation errors in the rolling function when the input type is float32 with NaN values. (0.1.15.23)
- Orca: Fixed the problem of an erroneous error message when read_table is used to load a distributed table. (0.1.15.23)
- Released version 1.20.2.0 for DolphinDB 1.20.2; version 1.10.12.0 for DolphinDB 1.10.12; version 1.0.24.1 for DolphinDB 1.00.24. Please make sure to get the appropriate Python API and orca version based on the version of the DolphinDB server.
- Added support for Python 3.8. (1.20.4.0, 1.10.15.0, 1.0.24.2)
- Added native methods to create DolphinDB databases and partitioned tables. (1.20.4.0, 1.10.15.0, 1.0.24.2)
- Improved the efficiency of converting pandas dataframes to DolphinDB table objects. (1.30.0.0, 1.20.5.0, 1.10.16.0)
- Further improved the efficiency of converting pandas dataframes to DolphinDB table objects. (1.30.0.1, 1.20.6.0, 1.10.17.0)
- The Session class constructor adds optional parameters enableSSL (encryption) and enableASYN (asynchronous communication); the default value of both is False. For example: s=ddb.Session(enableSSL=True, enableASYN=True). When enableSSL is True, the server side needs to add the configuration parameter enableHTTPS=true (Linux64 stable version >= 1.10.17, latest version >= 1.20.6) to successfully establish a connection. When asynchronous communication is enabled, only the session.run method is supported and there is no return value, which is suitable for asynchronous writing of data. (1.30.0.1, 1.20.6.0, 1.10.17.0)
New features
- Released C++ API of Visual Studio 2017 version. (1.10.9)
Bug fixes
- Enabled TCP_KEEPALIVE to handle the following situation: The publisher has disconnected but the subscriber is not aware of it. Subsequently, the subscriber does not receive data but does not initiate reconnection. (1.10.9)
New features
- Support using semicolon (;) to separate lines of script. (1.10.9)
Improvements
- Added support for Chinese tags. Chinese tags must use UTF-8 encoding. (1.10.10)