Datasets:
Tasks:
Text Generation
Modalities:
Text
Sub-tasks:
language-modeling
Languages:
English
Size:
10M - 100M
License:
// bdlma_heapbypassallocator.cpp                                      -*-C++-*-

// ----------------------------------------------------------------------------
//                                   NOTICE
//
// This component is not up to date with current BDE coding standards, and
// should not be used as an example for new development.
// ----------------------------------------------------------------------------

#include <bdlma_heapbypassallocator.h>

#include <bsls_ident.h>
BSLS_IDENT_RCSID(bdlma_heapbypassallocator_cpp,"$Id$ $CSID$")

#include <bsls_assert.h>
#include <bsls_alignmentutil.h>
#include <bsls_platform.h>

#include <bsl_algorithm.h>

#ifdef BSLS_PLATFORM_OS_WINDOWS
#include <windows.h>
#elif defined(BSLS_PLATFORM_OS_UNIX)
#include <unistd.h>
#include <sys/mman.h>
#endif

namespace BloombergLP {
namespace bdlma {

                // ========================================
                // struct HeapBypassAllocator::BufferHeader
                // ========================================

struct HeapBypassAllocator::BufferHeader {
    // This struct defines a link in a linked list of buffers allocated by
    // 'class HeapBypassAllocator'.  Each buffer header is located at the
    // beginning of the buffer it describes, and contains the size (in bytes)
    // of the buffer.

    BufferHeader           *d_nextBuffer;  // pointer to linked list of buffers
                                           // allocated after this one

    bsls::Types::size_type  d_size;        // size (in bytes) of this buffer
};

}  // close package namespace

                       // --------------------------
                       // bdlma::HeapBypassAllocator
                       // --------------------------

// PRIVATE CLASS METHODS
#if defined(BSLS_PLATFORM_OS_UNIX)
namespace bdlma {

char *HeapBypassAllocator::map(bsls::Types::size_type size)
{
    // Note that passing 'MAP_ANONYMOUS' and a null file descriptor tells
    // 'mmap' to use a special system file to map to.

    char *address = (char *)mmap(0,    // 'mmap' chooses what address to which
                                       // to map the memory
                                 size,
                                 PROT_READ | PROT_WRITE,
#ifdef BSLS_PLATFORM_OS_DARWIN
                                 MAP_ANON | MAP_PRIVATE,
#else
                                 MAP_ANONYMOUS | MAP_PRIVATE,
#endif
                                 -1,   // null file descriptor
                                 0);
    return (MAP_FAILED == address ? 0 : address);
}

void HeapBypassAllocator::unmap(void *address, bsls::Types::size_type size)
{
    // On some platforms, 'munmap' takes a 'char *', on others, a 'void *'.

    munmap((char *)address, size);
}

}  // close package namespace
#elif defined(BSLS_PLATFORM_OS_WINDOWS)
namespace bdlma {

char *HeapBypassAllocator::map(bsls::Types::size_type size)
{
    char *address =
        (char *)VirtualAlloc(0,    // 'VirtualAlloc' chooses what address to
                                   // which to map the memory
                             size,
                             MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE);
    return NULL == address ? 0 : address;
}
(preview truncated)
Dataset Card for "SourceCode"
Dataset Summary
SourceCode is a collection of source code gathered from curated GitHub "awesome" repository lists. It contains Python, Java, C++, and other programming languages, and can be used for NLP tasks such as language modeling and text generation.
Data sources:
- PYTHON_CODE: https://github.com/bharathgs/Awesome-pytorch-list
- JAVA_CODE: https://github.com/akullpp/awesome-java
- CPP_CODE: https://github.com/fffaraz/awesome-cpp
Supported Tasks and Leaderboards
- language modeling
- code generation, Leaderboard: code-autocomplete
Languages
- programming languages: Python, Java, C++
- natural language: English
Dataset Structure
Data Instances
An example from the 'train' split looks as follows (the example was too long and has been cropped):
{
    "text": """
import json
import argparse

def _parse_args():
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawTextHelpFormatter,
    )
    parser.add_argument(
        '--model-file',
        required=True,
        help=(
            'A pt file from '
            'https://github.com/pytorch/fairseq/tree/main/examples/hubert'
        )
    )
    return parser.parse_args()
"""
}
Data Fields
The data fields are the same among all splits.
text: a string feature.
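Since each record is simply a dictionary with a single text field holding one code snippet, preparing it for language modeling is straightforward. The sketch below is purely illustrative: the sample records and the make_lm_pairs helper are invented for this example and are not part of the dataset's tooling. It shows one simple way to turn a record into (input, target) token pairs for next-token prediction:

```python
# Illustrative stand-ins for dataset records -- each row is {"text": <code>}.
records = [
    {"text": "import json\nimport argparse\n"},
    {"text": "def add(a, b):\n    return a + b\n"},
]

def make_lm_pairs(record, max_len=16):
    """Split a record's 'text' into (input, target) token pairs where each
    target is the input shifted by one token -- the standard setup for
    next-token prediction (using naive whitespace tokenization)."""
    tokens = record["text"].split()
    inputs = tokens[:-1][:max_len]
    targets = tokens[1:][:max_len]
    return list(zip(inputs, targets))

pairs = make_lm_pairs(records[1])
print(pairs[0])  # ('def', 'add(a,')
```

A real training pipeline would substitute a proper code tokenizer (e.g. BPE) for the whitespace split, but the record shape it consumes is the same.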
Data Splits
python
$ wc -l python/*
10000 python/test.txt
5215412 python/train.txt
10000 python/valid.txt
5235412 total
java
$ wc -l java/*
950083 java/test.txt
2802880 java/train.txt
940803 java/valid.txt
4693766 total
cpp
$ wc -l cpp/*
1060014 cpp/test.txt
3119241 cpp/train.txt
1099124 cpp/valid.txt
5278379 total
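The wc -l counts above suggest each split is stored as plain text with one sample per line. Assuming that layout (the read_split helper below is a hypothetical sketch, not shipped tooling), a split file could be streamed without loading it all into memory:

```python
import io

def read_split(stream, limit=None):
    """Yield stripped, non-empty lines from a split file stream.

    'limit' caps the number of *input lines* scanned, which is useful when
    sampling from the multi-million-line train files."""
    for i, line in enumerate(stream):
        if limit is not None and i >= limit:
            break
        line = line.rstrip("\n")
        if line:
            yield line

# Stand-in for e.g. open("python/valid.txt") -- contents are invented:
fake_file = io.StringIO("import os\nprint('hi')\n\nx = 1\n")
samples = list(read_split(fake_file, limit=10))
print(len(samples))  # 3
```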
Dataset Creation
Curation Rationale
This dataset was uploaded to Hugging Face Datasets to support code generation research.
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Citation:
APA:
Xu, M. code-autocomplete: Code AutoComplete with GPT2 model (Version 0.0.4) [Computer software]. https://github.com/shibing624/code-autocomplete
BibTeX:
@software{Xu_code-autocomplete_Code_AutoComplete,
author = {Xu, Ming},
title = {code-autocomplete: Code AutoComplete with GPT2 model},
url = {https://github.com/shibing624/code-autocomplete},
version = {0.0.4}
}
Annotations
Annotation process
Who are the annotators?
nobody
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
This dataset was developed as a benchmark for evaluating code generation models.
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Curated from GitHub "awesome" programming code repositories.
Licensing Information
GNU Free Documentation License v1.3 or later.
For research use only.
Contributions
Thanks to @shibing624 for adding this dataset.