Commit
Merge pull request #1183 from pyvideo/remove-unused-attributes
Remove unused attributes
jonafato authored Aug 13, 2024
2 parents 5976c18 + 2d9d774 commit a0e666d
Showing 261 changed files with 38 additions and 334 deletions.
4 changes: 1 addition & 3 deletions chipy/videos/15260.json
@@ -7,7 +7,6 @@
   "quality_notes": null,
   "recorded": "2023-05-11T19:00:00",
   "slug": "Learning_Sprint_An_Experiment",
-  "source_url": "https://youtu.be/JIMSp2Vqjgc",
   "speakers": [
     "Eve Qiao",
     "Ray Berg"
@@ -35,6 +34,5 @@
       "label": "conf",
       "url": "https://www.chipy.org/meetings/223/"
     }
-  ],
-  "veyepar_state": 10
+  ]
 }
4 changes: 1 addition & 3 deletions chipy/videos/15261.json
@@ -7,7 +7,6 @@
   "quality_notes": null,
   "recorded": "2023-05-11T19:40:00",
   "slug": "Micropython_gpio",
-  "source_url": "https://youtu.be/6wc452U2Gzw",
   "speakers": [
     "Andrew Wingate"
   ],
@@ -37,6 +36,5 @@
       "label": "conf",
       "url": "https://www.chipy.org/meetings/223/"
     }
-  ],
-  "veyepar_state": 10
+  ]
 }
4 changes: 1 addition & 3 deletions chipy/videos/15262.json
@@ -7,7 +7,6 @@
   "quality_notes": null,
   "recorded": "2023-05-11T20:10:00",
   "slug": "Ellipses_and_Arcane_Syntax",
-  "source_url": "https://youtu.be/fP3okTK49dI",
   "speakers": [
     "Phil Robare"
   ],
@@ -34,6 +33,5 @@
       "label": "conf",
       "url": "http://www.chipy.org/"
     }
-  ],
-  "veyepar_state": 10
+  ]
 }
4 changes: 1 addition & 3 deletions chipy/videos/15266.json
@@ -7,7 +7,6 @@
   "quality_notes": null,
   "recorded": "2023-06-08T18:52:00",
   "slug": "JSON_Web_Tokens_for_Fun_and_Profit",
-  "source_url": "https://youtu.be/gyUNW9Zkwv0",
   "speakers": [
     "Heather White"
   ],
@@ -34,6 +33,5 @@
       "label": "conf",
       "url": "https://www.chipy.org/meetings/228/"
     }
-  ],
-  "veyepar_state": 10
+  ]
 }
4 changes: 1 addition & 3 deletions chipy/videos/15267.json
@@ -7,7 +7,6 @@
   "quality_notes": null,
   "recorded": "2023-06-08T19:30:59",
   "slug": "Exploring_the_Python_Run_Time_Environment",
-  "source_url": "https://youtu.be/ATSc5aLPSOc",
   "speakers": [
     "Alexander Leopold Shon"
   ],
@@ -34,6 +33,5 @@
       "label": "conf",
       "url": "https://www.chipy.org/meetings/228/"
     }
-  ],
-  "veyepar_state": 10
+  ]
 }
4 changes: 1 addition & 3 deletions chipy/videos/15344.json
@@ -6,7 +6,6 @@
   "quality_notes": null,
   "recorded": "2024-01-11T18:30:00",
   "slug": "Whats_in_your_AI_code_Learn_why_every_SCA_tool_is_wrong_and_how_to_deal_with_it",
-  "source_url": "https://youtu.be/HDT9K5rGvWo",
   "speakers": [
     "Anand Sawant"
   ],
@@ -33,6 +32,5 @@
       "label": "conf",
       "url": "https://www.chipy.org/meetings/240/"
     }
-  ],
-  "veyepar_state": 8
+  ]
 }
4 changes: 1 addition & 3 deletions chipy/videos/15345.json
@@ -6,7 +6,6 @@
   "quality_notes": null,
   "recorded": "2024-01-11T19:05:00",
   "slug": "must_use_correc_snek_python_for_Debian_and_derivatives",
-  "source_url": "https://youtu.be/sHMKigxHBVA",
   "speakers": [
     "Heather White"
   ],
@@ -33,6 +32,5 @@
       "label": "conf",
       "url": "https://www.chipy.org/meetings/240/"
     }
-  ],
-  "veyepar_state": 8
+  ]
 }
4 changes: 1 addition & 3 deletions chipy/videos/15346.json
@@ -6,7 +6,6 @@
   "quality_notes": null,
   "recorded": "2024-01-11T19:35:00",
   "slug": "BluPants_opensource_educational_Python_bots",
-  "source_url": "https://youtu.be/ZGXPW248azA",
   "speakers": [
     "Marcelo Sacchetin"
   ],
@@ -33,6 +32,5 @@
       "label": "conf",
       "url": "https://www.chipy.org/meetings/240/"
     }
-  ],
-  "veyepar_state": 8
+  ]
 }
4 changes: 1 addition & 3 deletions chipy/videos/15352.json
@@ -6,7 +6,6 @@
   "quality_notes": null,
   "recorded": "2024-04-11T18:30:00",
   "slug": "Intro_to_PropertyBased_Testing_with_Hypothesis",
-  "source_url": "https://youtu.be/bhRTEyGTRU0",
   "speakers": [
     "Paul Zuradzki"
   ],
@@ -33,6 +32,5 @@
       "label": "conf",
       "url": "https://www.chipy.org/meetings/247/"
     }
-  ],
-  "veyepar_state": 5
+  ]
 }
4 changes: 1 addition & 3 deletions chipy/videos/15353.json
@@ -6,7 +6,6 @@
   "quality_notes": null,
   "recorded": "2024-04-11T19:10:00",
   "slug": "Exploring_Cellular_Automata_in_Python_using_Golly",
-  "source_url": "https://youtu.be/cnG14Ue_B3w",
   "speakers": [
     "Joshua Herman"
   ],
@@ -33,6 +32,5 @@
       "label": "conf",
       "url": "https://www.chipy.org/meetings/247/"
     }
-  ],
-  "veyepar_state": 5
+  ]
 }
4 changes: 1 addition & 3 deletions chipy/videos/15424.json
@@ -6,7 +6,6 @@
   "quality_notes": null,
   "recorded": "2024-06-13T18:30:00",
   "slug": "Python_The_Language_for_Understanding_and_Building_the_Future_of_AI",
-  "source_url": "https://youtu.be/4f8rlX8J4_s",
   "speakers": [
     "Paul Ebreo"
   ],
@@ -33,6 +32,5 @@
       "label": "conf",
       "url": "https://www.chipy.org/meetings/250/"
     }
-  ],
-  "veyepar_state": 6
+  ]
 }
3 changes: 1 addition & 2 deletions chipy/videos/15425.json
@@ -33,6 +33,5 @@
       "label": "conf",
       "url": "https://www.chipy.org/meetings/250/"
     }
-  ],
-  "veyepar_state": 6
+  ]
 }
@@ -2,7 +2,6 @@
"description": "This talk discusses Apache Arrow project and how it already interacts\nwith the Python ecosystem.\n\nThe Apache Arrow project specifies a standardized language-independent\ncolumnar memory format for flat and nested data, organized for efficient\nanalytic operations on modern hardware. On top of that standard, it\nprovides computational libraries and zero-copy streaming messaging and\ninterprocess communication protocols, and as such, it provides a\ncross-language development platform for in-memory data. It has support\nfor many languages, including C, C++, Java, JavaScript, MATLAB, Python,\nR, Rust, ..\n\nThe Apache Arrow project, although still in active development, has\nalready several applications in the Python ecosystem. For example, it\nprovides the IO functionality for pandas to read the Parquet format (a\ncolumnar, binary file format used a lot in the Hadoop ecosystem). Thanks\nto the standard memory format, it can help improve interoperability\nbetween systems, and this is already seen in practice for the Spark /\nPython interface, by increasing the performance of PySpark. Further, it\nhas the potential to provide a more performant string data type and\nnested data types (like dicts or lists) for Pandas dataframes, which is\nalready being experimented with in the fletcher package (using the\npandas ExtensionArray interface).\n\nApache Arrow, defining a columnar, in-memory data format standard and\ncommunication protocols, provides a cross-language development platform\nwith already several applications in the PyData ecosystem.\n",
"duration": 1789,
"language": "eng",
"published_at": "2019-10-27T16:48:54.000Z",
"recorded": "2019-09-04",
"speakers": [
"Joris Van den Bossche"
@@ -2,7 +2,6 @@
"description": "| Jupyter notebooks are often a mess. The code produced is working for\n one notebook, but it's hard to maintain or to re-use.\n| In this talks I will present some best practices to make code more\n readable, better to maintain and re-usable.\n\n| This will include:\n| - versioning best practices\n| - how to use submodules\n| - coding methods to avoid (e.g. closures)\n\nJupyter notebooks are often a mess. The code produced is working for one\nnotebook, but it's hard to maintain or to re-use. In this talks I will\npresent some best practices to make code more readable, better to\nmaintain and re- usable.\n",
"duration": 850,
"language": "eng",
"published_at": "2019-10-27T17:38:59.000Z",
"recorded": "2019-09-04",
"speakers": [
"Alexander CS Hendorf"
@@ -2,7 +2,6 @@
"description": "We are lucky there are very diverse solutions to make Python faster that\nhave been in use for a while: from wrapping compiled languages (NumPy),\nto altering the Python syntax to make it more suitable to compilers\n(Cython), to using a subset of it which can in turn be accelerated\n(numba). However, each of these options has a tradeoff, and there is no\nsilver bullet.\n\npoliastro is a library for Astrodynamics written in pure Python. All its\ncore algorithms are accelerated with numba, which allows poliastro to be\ndecently fast while having minimal code complexity and avoid using other\nlanguages.\n\nHowever, even though numba is quite mature as a library and most of the\nPython syntax and NumPy functions are supported, there are still some\nlimitations that affect its usage. In particular, we strive to offer a\nhigh-level API with support for physical units and reusable functions\nwhich can be passed as arguments, which sometimes require using complex\nobjects or introspective Python behavior which is not available.\n\nIn this talk we will discuss the strategies and workarounds we have\ndeveloped to overcome these problems, and what advanced numba features\nwe can leverage.\n\nThere are several solutions to make Python faster, and choosing one is\nnot easy: we would want it to be fast without sacrificing its\nreadability and high-level nature. We tried to do it for an\nAstrodynamics library using numba. How did it turn out?\n",
"duration": 893,
"language": "eng",
"published_at": "2020-03-06T17:39:14.000Z",
"recorded": "2019-09-05",
"speakers": [
"Juan Luis Cano Rodr\u00edguez"
@@ -2,7 +2,6 @@
"description": "Caterva: A Compressed And Multidimensional Container For Big Data\n=================================================================\n\n`Caterva <https://github.com/Blosc/Caterva>`__ is a C library on top of\n`C-Blosc2 <https://github.com/Blosc/c-blosc2>`__ that implements a\nsimple multidimensional container for compressed binary data. It adds\nthe capability to store, extract, and transform data in these\ncontainers, either in-memory or on-disk.\n\nWhile there are several existing solutions for this scenario (HDF5 is\none of the most known), Caterva brings novel features that, when taken\ntoghether, set it appart from them:\n\n- **Leverage important features of C-Blosc2**. C-Blosc2 is the next\n generation of the well-know, high performance C-Blosc compression\n library (see below for a more in-depth description).\n\n- **Fast and seamless interface with the compression engine**. While in\n other solutions compression seems an after-thought and can implies\n several copies of buffers internally, the interface of Caterva and\n C-Blosc2 (its internal compression engine) is meant to be as direct\n as possible minimizing copies and hence, increasing performance.\n\n- **Both in-memory and on-disk paradigms are supported the same way**.\n This allows for using the same API for data that can be either\n in-memory or on-disk.\n\n- **Support for a plain buffer data layout**. This allows for\n essentially no-copy data sharing among existing libraries (NumPy),\n allowing to use existing functionality to be used directly in Caterva\n without loosing performance.\n\nAlong this features, there is an important 'mis-feature': Caterva is\n**type- less**. Lacking the notion of data type means that Caterva\ncontainers are not meant to be used in computations directly, but rather\nin combination with other higher-level libraries. 
While this can be seen\nas a drawback, it actually favors simplicity and leaves up to the user\nthe addition of the types that he is more interested in, which is far\nmore flexible than typed-aware libraries (HDF5, NumPy and many others).\n\nDuring our talk, we will describe all these Caterva features by using\n`cat4py <https://github.com/Blosc/cat4py>`__, a Python wrapper for\nCaterva. Among the points to be discussed would be:\n\n- Introduction to the main features of Caterva.\n\n- Description of the basic data container and its usage.\n\n- Short discussion of different use cases:\n\n- Create and fill high dimensional arrays.\n\n- Get multi-dimensional slices out of the arrays.\n- How different compression codecs and filters in the pipeline affect\n store/retrieval performance.\n\nWe have been using Caterva in one of our internal projects for several\nmonths now, and we are pretty happy with the flexibility and easy-of-use\nthat it brings to us. This is why we decided to open-source it in the\nhope that it would benefit others, but also that others may help us in\ndeveloping it further ;-)\n\nAbout C-Blosc and C-Blosc2\n--------------------------\n\n`C-Blosc <https://github.com/Blosc/c-blosc>`__ is a high performance\ncompressor optimized for binary data. It has been designed to transmit\ndata to the processor cache faster than the traditional, non-compressed,\ndirect memory fetch approach via a memcpy() OS call. Blosc is the first\ncompressor (that we are aware of) that is meant not only to reduce the\nsize of large datasets on- disk or in-memory, but also to accelerate\nmemory-bound computations.\n\n`C-Blosc2 <https://github.com/Blosc/c-blosc2>`__ is the new major\nversion of C-Blosc, with a revamped API and support for new compressors\nand new filters (data transformations), including filter pipelining,\nthat is, the capability to apply different filters during the\ncompression pipeline, allowing for more adaptability to the data to be\ncompressed. 
Dictionaries are also introduced, allowing better handling\nof redundancies among independent blocks and generally increasing\ncompression ratio and performance. Last but not least, there are new\ndata containers that are meant to overcome the 32-bit limitation of the\noriginal C-Blosc. Furthermore, the new data containers are available in\nvarious formats, including in-memory and on-disk implementations.\n\nCaterva is a library on top of the Blosc2 compressor that implements a\nsimple multidimensional container for compressed binary data. It adds\nthe capability to store, extract, and transform data in these\ncontainers, either in-memory or on-disk.\n",
"duration": 1600,
"language": "eng",
"published_at": "2019-10-27T17:07:02.000Z",
"recorded": "2019-09-04",
"speakers": [
"Francesc Alted"
@@ -2,7 +2,6 @@
"description": "Synthetic data is useful in many contexts, including\n\n- providing \"safe\", non-private alternatives to data containing\n personally identifiable information\n- software and pipeline testing\n- software and service development\n- enhancing datasets for machine learning.\n\nSynthetic data is often created on a bespoke basis, and since the advent\nof generative adverserial networks (GANs) there has been considerable\ninterest and experimentation with using those as the basis for creating\nsynthetic data.\n\nWe have taken a different approach. We have worked for some years on\ndeveloping methods for automatically finding constraints that\ncharacterise data, and which can be used for testing data validity\n(so-called \"test-driven data analysis\", TDDA). Such constraints form (by\ndesign) a useful characterisation of the data from which they were\ngenerated. As a result, methods that generate datasets that match the\nconstraints necessarily construct datasets that match many of the\noriginal characteristics of the data from which the constraints were\nextracted.\n\nAn important aspect of datasets is the relationship between \"good\" (~\nvalid) and \"bad\" (~ invalid) data, both of which are typically present.\nSystems for creating useful, realistic synthetic data generally need to\nbe able to synthesize both kinds, in realistic mixtures.\n\nThis talk will discuss data synthesis from constraints, describing what\nhas been achieved so far (which includes synthesizing good and bad data)\nand future research directions.\n\nWe introduce a method for creating synthetic data \"to order\" based on\nlearned (or provided) constraints and data classifications. This\nincludes \"good\" and \"bad\" data.\n",
"duration": 1482,
"language": "eng",
"published_at": "2019-12-01T09:58:56.000Z",
"recorded": "2019-09-04",
"speakers": [
"Nick Radcliffe"
@@ -2,7 +2,6 @@
"description": "For instance, when predicting the salary to offer given the descriptions\nof professional experience, the risk is to capture indirectly a gender\nbias present in the distribution of salaries. Another example is found\nin biomedical applications, where for an automated radiology diagnostic\nsystem to be useful, it should use more than socio-demographic\ninformation to build its prediction.\n\nHere I will talk about confounds in predictive models. I will review\nclassic deconfounding techniques developed in a well-established\nstatistical literature, and how they can be adapted to predictive\nmodeling settings. Departing from deconfounding, I will introduce a\nnon-parametric approach \u2013that we named \u201cconfound-isolating\ncross-validation\u201d\u2013 adapting cross-validation experiments to measure the\nperformance of a model independently of the confounding effect.\n\nThe examples are mentioned in this work are related to the common issues\nin neuroimage analysis, although the approach is not limited to\nneuroscience and can be useful in another domains.\n\nConfounding effects are often present in observational data: the effect\nor association studied is observed jointly with other effects that are\nnot desired.\n",
"duration": 889,
"language": "eng",
"published_at": "2019-12-01T13:49:34.000Z",
"recorded": "2019-09-04",
"speakers": [
"Darya Chyzhyk"
@@ -2,7 +2,6 @@
"description": "| Sharing the result of a Jupyter notebook is currently not an easy\n path. With voila we are changing this. Voila is a small but important\n ingredient in the Jupyter ecosystem. Voila can execute notebooks,\n keeping the kernel connected but does not allow for arbitrary code\n execution, making it safe to share your notebooks with others.\n| With new libraries built on top of Jupyter widgets/ipywidgets\n (ipymaterialui and ipyvuetify) we allow beautiful modern React and Vue\n components to enter the Jupyter notebook. Using voila we can integrate\n the ipywidgets seamlessly into modern React and Vue pages, to build\n modern dashboards directly from a Jupyter notebook.\n| I will give a live example on how to transform a Jupyter notebook into\n a fully functional single page application with a modern (Material\n Design) look.\n\nTurn your Jupyter notebook into a beautiful modern React or Vue based\ndashboard using voila and Jupyter widgets.\n",
"duration": 1827,
"language": "eng",
"published_at": "2019-12-02T13:26:11.000Z",
"recorded": "2019-09-04",
"speakers": [
"Maarten Breddels",
@@ -2,7 +2,6 @@
"description": "In this presentation, we demonstrate how xtensor can be used to\nimplement numerical methods very efficiently in C++, with a high-level\nnumpy-style API, and expose it to Python, Julia, and R for free. The\nresulting native extension operates in-place on Python, Julia, and R\ninfrastructures without overhead.\n\nWe then dive into the xframe package, a dataframe project for the C++\nprogramming language, exposing an API very similar to Python's xarray.\n\nFeatures of xtensor and xframe will be demonstrated using the xeus-cling\njupyter kernel, enabling interactive use of the C++ programming language\nin the notebook.\n\nThe main scientific computing programming languages have different\nmodels the main data structures of data science such as dataframes and\nn-d arrays. In this talk, we present our approach to reconcile the data\nscience tooling in this polyglot world.\n",
"duration": 1583,
"language": "eng",
"published_at": "2020-03-06T16:20:25.000Z",
"recorded": "2019-09-05",
"speakers": [
"Sylvain Corlay",
