memory keeps increasing when calling tf.signal.rfft in a loop #76

Open
CassiniHuy opened this issue Jan 8, 2023 · 2 comments

Comments

@CassiniHuy

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): 1.15.5+nv22.12
  • Python version: 3.8
  • CUDA/cuDNN version: 12.0
  • GPU model and memory: RTX 4090, 24GB

Describe the current behavior
When tf.signal.rfft is called repeatedly in a loop, the memory usage and per-call time cost gradually and unexpectedly increase.

Describe the expected behavior
The memory usage should remain constant and the per-call time cost should stay stable across iterations.

Code to reproduce the issue

import tensorflow as tf
import time
import psutil

graph = tf.Graph()
with graph.as_default():
    frames = tf.random.uniform((144, 400))
    spec = tf.signal.rfft(frames, [512, ])

with tf.Session(graph=graph) as session:
    session.run(tf.global_variables_initializer())
    session.graph.finalize()
    for i in range(10000):
        stime = time.time()

        session.run(spec)       # rfft computation

        etime = time.time()
        elapsed = etime - stime
        if i % 500 == 0:
            print(f'{i:05} Time: {elapsed}')                            # time cost
            used_mem = psutil.virtual_memory().used                     # memory usage
            print(f"{i:05} Memory used: {used_mem / 1024 / 1024} Mb\n")

Other info / logs

The output of the above code:

00000 Time: 0.04090452194213867
00000 Memory used: 1823.8671875 Mb

00500 Time: 0.0012400150299072266
00500 Memory used: 1833.21875 Mb

01000 Time: 0.0013303756713867188
01000 Memory used: 1845.03125 Mb

01500 Time: 0.001249551773071289
01500 Memory used: 1856.59765625 Mb

02000 Time: 0.001224517822265625
02000 Memory used: 1868.40234375 Mb

02500 Time: 0.0013511180877685547
02500 Memory used: 1880.21484375 Mb

03000 Time: 0.0013630390167236328
03000 Memory used: 1892.02734375 Mb

03500 Time: 0.0014331340789794922
03500 Memory used: 1904.0859375 Mb

04000 Time: 0.0024590492248535156
04000 Memory used: 1915.65234375 Mb

04500 Time: 0.001255035400390625
04500 Memory used: 1927.46484375 Mb

05000 Time: 0.001493215560913086
05000 Memory used: 1939.0234375 Mb

05500 Time: 0.0015327930450439453
05500 Memory used: 1951.08203125 Mb
  • The issue also happens when I use the NGC container via Docker.
@CassiniHuy (Author)

It also happens with other signal-processing ops such as tf.signal.fft and tf.signal.dct; see the sketch below.
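
A minimal sketch of how the repro above can be adapted to those ops (the cast to complex64 is my addition, since tf.signal.fft expects a complex input; tf.signal.dct takes the real frames directly):

graph = tf.Graph()
with graph.as_default():
    frames = tf.random.uniform((144, 400))
    spec_fft = tf.signal.fft(tf.cast(frames, tf.complex64))   # FFT needs a complex input
    spec_dct = tf.signal.dct(frames)                          # DCT-II over the last axis

Running either op in the same session loop shows the same gradual memory growth for me.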

@CassiniHuy (Author)

  • I found that the problem occurs when tf.signal.rfft receives a Variable or a Tensor created by tf.random, but it disappears when the input is a constant Tensor or a NumPy array (see the sketch after this list).
  • I uninstalled TensorFlow 1.15.5 and reinstalled 1.15.4, and the problem disappears.
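
For reference, a minimal sketch of the non-leaking input variant (shapes mirror the repro above; building the frames from a NumPy array instead of a tf.random op is the only change):

import numpy as np
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Constant input built from a NumPy array instead of tf.random:
    # with this input the memory usage stays flat in the same loop.
    frames_np = np.random.uniform(size=(144, 400)).astype(np.float32)
    frames = tf.constant(frames_np)
    spec = tf.signal.rfft(frames, [512])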
