Channel: Media Foundation Development for Windows Desktop forum
Viewing all 1079 articles

Webcam capture not working on win 8.1


Hi,

I recently updated my laptop from Win8 to Win8.1, and my external webcams from Logitech and Creative stopped working. My application uses Media Foundation for the capture process, and everything was fine on Win7 and Win8, but now no samples are delivered...

I tested the SDK samples MFCaptureToFile and MFCaptureD3D, and the same problem arises.

I read something on Stack Overflow about a variable not being initialized by webcam drivers: FrameCompletionNumber inside the KS_FRAME_INFO structure. As I understood it, it is up to the manufacturers to fix this issue. Is that right?

Regards,

Sergio



Possible Buffer Underrun Issues Causing Noisy Playback


I am trying to make a sequence synthesizer and feel like I am so close to the finish line, but I am having a weird issue. When no oscillators are feeding the buffer, it is silent, as it should be. But once even one oscillator starts feeding the buffer, a lot of noise is played out along with it. I am testing this in exclusive event-driven rendering mode, so I believe I am filling the buffer properly, but perhaps it takes too long to fill the buffer and that is creating an underrun?

hr = _RenderClient->GetBuffer(_BufferSizePerPeriod, &data);
if (SUCCEEDED(hr))
{
    // Fill one period's worth of audio, then hand the buffer back to WASAPI.
    soundOutputPtr->audioOut((float *) data, _BufferSizePerPeriod, channels, 0, 0);

    hr = _RenderClient->ReleaseBuffer(_BufferSizePerPeriod, 0);
}

The code above passes the buffer off to be filled; this is the fill loop:

for (int i = 0; i < bufferSize / nChannels; i++) {

    double sample = 0;
    int it = waves.size();
    for (int j = 0; j < it; j++) {
        float val = waves[j].getSample();
        soundBuffers[j][i] = val;   // keep each oscillator's individual output
        sample += val;              // sum into the mix
    }

    // Write the same mix to both channels (interleaved stereo).
    output[i * nChannels    ] = sample;
    output[i * nChannels + 1] = sample;

    soundBuffer[i] = sample;
}

waves is a vector of oscillators of various types (sine, sawtooth, etc.), each producing one sample per call:

float oscillator::getSample(){
    phase += phaseAdder;
    while (phase > TWO_PI) phase -= TWO_PI;   // wrap the phase accumulator
    if (type == sineWave){
        return sin(phase) * volume;
    } else if (type == squareWave){
        return (sin(phase) > 0 ? 1 : -1) * volume;
    } else if (type == triangleWave){
        float pct = phase / TWO_PI;
        return ( pct < 0.5 ? ofMap(pct, 0, 0.5, -1, 1) : ofMap(pct, 0.5, 1.0, 1, -1)) * volume;
    } else if (type == sawWave){
        float pct = phase / TWO_PI;
        return ofMap(pct, 0, 1, -1, 1) * volume;
    } else if (type == sawWaveReverse){
        float pct = phase / TWO_PI;
        return ofMap(pct, 0, 1, 1, -1) * volume;
    }
    return 0.0f;   // unknown type: falling off the end without a return is undefined behavior
}

I can't really find anything explaining why so much white noise would be played out, so an underrun is my only guess. Has anyone else experienced this?
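Two things might be worth ruling out before blaming underruns. First, WASAPI's GetBuffer/ReleaseBuffer counts are in frames (one sample per channel), so if audioOut treats _BufferSizePerPeriod as a total sample count and divides by nChannels, only part of each period gets filled and the leftover plays back as garbage. Second, summing several full-scale oscillators can push the mix outside [-1, 1], which clips audibly. A portable sketch of a fill routine that works in frames and clamps the mix (the names here are hypothetical stand-ins, not the project's actual API):

```cpp
#include <cstddef>
#include <vector>
#include <algorithm>
#include <cmath>

// Mix a set of per-oscillator sample generators into an interleaved
// float buffer. frameCount is in FRAMES (WASAPI's GetBuffer unit),
// so the total number of floats written is frameCount * channels.
template <typename GetSampleFn>
void fillInterleaved(float* out, std::size_t frameCount,
                     std::size_t channels,
                     std::vector<GetSampleFn>& oscillators)
{
    for (std::size_t i = 0; i < frameCount; ++i) {
        double mix = 0.0;
        for (auto& osc : oscillators)
            mix += osc();                     // one sample per oscillator

        // Clamp so several full-scale oscillators can't overshoot
        // [-1, 1]; overshoot clips as loud broadband noise.
        float s = static_cast<float>(std::clamp(mix, -1.0, 1.0));

        for (std::size_t ch = 0; ch < channels; ++ch)
            out[i * channels + ch] = s;       // same mix on every channel
    }
}
```

With two oscillators at 0.8 each, the unclamped sum would be 1.6; the clamp holds it at 1.0 instead of letting the conversion to the device format wrap or clip harshly.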

H264 Full HD Encoding


I have the following setup:

An IMFSinkWriter that outputs H264/AAC using the MF encoders. I feed the sink writer RGB24 or IYUV frames. I'm trying two types of sink: a file sink and multicast.

Everything works fine when I feed the encoder 720p and set 720p as the output. The problems appear when I try 1080i@59.94 or 1080i@50. I feed it 1920x1080 frames on input (RGB24 or IYUV), but the encoder cannot keep up.

On an i7-930 machine with 6 GB of RAM, it reaches fewer than 20 frames per second. The number of input samples is twice the number of processed samples, and the gap grows over time (on the input side). Memory usage is also skyrocketing.

My question is: has anybody successfully used the MF H264 encoder for real-time Full HD encoding (1080i@59.94, for example)?

My tests suggest this isn't achievable?!

Maybe there are some settings in ICodecAPI that I could try out?
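For what it's worth, the growing backlog matches simple arithmetic: 1080i@59.94 is roughly 29.97 interlaced frames per second on input, and if the encoder drains under 20 fps, every second of wall time leaves several uncompressed frames queued. A back-of-envelope sketch (the fps figures are assumptions taken from the description above):

```cpp
#include <cstdint>

// Bytes of one uncompressed RGB24 frame.
constexpr std::uint64_t frameBytes(std::uint64_t w, std::uint64_t h) {
    return w * h * 3;            // 3 bytes per pixel for RGB24
}

// Backlog growth in bytes per second when samples arrive faster
// than the encoder drains them.
constexpr std::uint64_t backlogPerSecond(double inputFps, double encodeFps,
                                         std::uint64_t bytesPerFrame) {
    return inputFps <= encodeFps ? 0
         : static_cast<std::uint64_t>((inputFps - encodeFps) * bytesPerFrame);
}
```

frameBytes(1920, 1080) is about 6.2 MB per RGB24 frame, so a 10 fps shortfall queues roughly 62 MB of raw video per second, which would explain the memory growth. On the settings side, ICodecAPI properties such as CODECAPI_AVEncCommonQualityVsSpeed may trade quality for throughput, though whether that closes a gap this large is an open question.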

Media Foundation playlist


I am having trouble finding a sample program that plays a playlist under Media Foundation. I see a comment that an example was removed 'in favor' of another interface, which is now itself deprecated. No one seems to have bothered to publish a non-deprecated example; I would have to download and cut a CD to find one. The topic seems old: dead links and so on.

So, can someone point me to a code sample?

Ed



How to write a network source?

I want to write a network source that implements another network protocol, but I do not know how to implement it. Can anyone give me some suggestions?

Media Foundation will not release a file after MFShutdown is called


We are using the now-deprecated IMFPMediaPlayer for a small media player embedded in a larger Java desktop application. We use IMFPMediaPlayer::CreateMediaItemFromURL with a callback object. Everything related to Media Foundation is queued onto a single thread with a message pump, and play, pause, seek, etc. work fine. However, after MFShutdown, the Java process cannot delete the file that was created for playback because the file is still held open by MFPlay.dll (verified with Sysinternals Process Explorer). The fact that MFPlay is still around bugs me.

I added logging to make sure that all of our objects get the appropriate Release() calls (and that the ref count is 1) on shutdown. MFShutdown returns S_OK.

I originally wrote this so that MFStartup and MFShutdown were not called by our C++ code, but once this delete bug surfaced, I tried forcing the shutdown.

Basically, the Java app downloads a zip file full of media, extracts the media to the temp folder, then plays it as needed; once a file has been played, it should be deleted. Everything works except the delete after MFShutdown.

Thanks in advance.

Using IMFSinkWriter and IMFSourceReader, how to store custom timestamp or tag with each frame as metadata?


Hello,

I need to store a custom timestamp per frame when encoding frames using IMFSinkWriter, to be read back later as attributes on the sample using IMFSourceReader. This is with the H264 encoder on Windows 8.

If that is not possible, is there a sample ID, or any unique value that exists during encoding, that can be read back later when decoding?

If this isn't possible, it seems like an obvious way to extend the API in Windows 9; after all, APIs are the reason Windows still matters.

Thanks.
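While I don't know of a documented way to attach arbitrary per-sample attributes that survive the H.264 encode/decode round trip, the sample's presentation time (IMFSample::SetSampleTime, in 100-nanosecond units) does survive it, and for constant-frame-rate video a frame index maps to and from that time losslessly. A portable sketch of the mapping (the rounding choice is mine, not something the API mandates):

```cpp
#include <cstdint>

// Media Foundation timestamps are in 100-nanosecond units.
constexpr std::int64_t kHnsPerSecond = 10'000'000;

// Presentation time of frame `index` at `fpsNum/fpsDen` frames per second.
constexpr std::int64_t frameToHns(std::int64_t index,
                                  std::int64_t fpsNum, std::int64_t fpsDen) {
    return index * kHnsPerSecond * fpsDen / fpsNum;
}

// Recover the frame index from a timestamp (rounds to nearest frame).
constexpr std::int64_t hnsToFrame(std::int64_t hns,
                                  std::int64_t fpsNum, std::int64_t fpsDen) {
    return (hns * fpsNum + kHnsPerSecond * fpsDen / 2)
         / (kHnsPerSecond * fpsDen);
}
```

So a frame index at 29.97 fps (30000/1001) round-trips through the sample time and back, which can serve as a stable per-frame ID without any extra metadata.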

Make SinkWriter Encode, yet not write output to File/ByteStream.


I have my end-to-end scenario completed: creating IMFSamples from raw YUV and encoding into mp4 or a byte stream.

However, for my purposes I am trying to isolate the cycles the SinkWriter spends on encoding from the cycles spent writing output to the media sink.

I want a scenario that involves only encoding, with no I/O operations to the media sink.
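One hedged approach to the isolation itself is a custom IMFByteStream whose Write simply discards the bytes, so the sink writer still "writes" but no real I/O happens. Short of that, wall-clock timing around the calls gives a first-order split (the sink writer does work on internal threads, so this only bounds the cost seen on the calling thread); a portable helper, nothing Media Foundation specific about it:

```cpp
#include <chrono>
#include <utility>

// Times a single call and returns {result, elapsed}. Wrapping
// each WriteSample call with this, and separately the Finalize/IO
// phase, gives a rough encode-cost vs. write-cost breakdown.
template <typename Fn>
auto timedCall(Fn&& fn)
    -> std::pair<decltype(fn()), std::chrono::microseconds>
{
    auto t0 = std::chrono::steady_clock::now();
    auto result = fn();
    auto t1 = std::chrono::steady_clock::now();
    return { std::move(result),
             std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0) };
}
```

Summing the per-call durations over a run gives the split without changing the pipeline, at the cost of measuring wall time rather than true CPU cycles.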


Multiple videos on multiple displays synced


Hi,

I would like to play multiple video streams on multiple displays (screens, monitors) in fullscreen mode, synced. All video streams must start at exactly the same time. Which component/technique from Media Foundation should I use to achieve that?

In other words:

I have 3 videos, for example video1.avi, video2.avi, and video3.avi, and 3 displays connected to the computer. When I call some function, all videos should preload and start playing at EXACTLY the same time on the different screens in fullscreen mode (video1.avi on screen 1, video2.avi on screen 2, ...).

And my second question: is it possible to achieve this functionality without Media Foundation, using some .NET component/library? (I'm a C# programmer, so I would rather use C# .NET than C++ and Media Foundation.)

Thanks for the help

DX11 Video Renderer sample code is incomplete.

Your ref : http://code.msdn.microsoft.com/windowsdesktop/DirectX-11-Video-Renderer-0e749100

Dear Sirs,

So ... that project creates a DLL, which isn't a standard EVR, so I have no clue how to fire it up.

Feedback states that the only way to test it is to use a closed-source version of TopoEdit in the Windows 8 SDK.

That isn't very helpful, as it doesn't demonstrate how to use the thing.

Please provide sample code, the simpler the better, that demonstrates how to use this DirectX 11 Video Renderer, or a derivative, to throw a video texture onto some simple geometry on the screen.

As a follow up, please demonstrate multiple video textures playing simultaneously to demonstrate the API supports this feature. If it doesn't, please add this support :)

Sorry to give you a hard time but I need a solid video API and if Windows doesn't provide one, it's time to look to other operating systems for a robust solution.

Regards,
Steve.

How to properly shut down the media session when the microphone is unplugged?


I modified the transcode sample to record the microphone to a wmv file. Recording stops when I unplug the mic.

However, the recorded file shows a media length of 0:00:00 (the file size looks alright, though).

This is because hr = pEvent->GetStatus(&hrStatus); yields an hrStatus of 0xC00D4E86, causing the session to auto-terminate.

Thus the output file was never closed and finalized properly. Is there a way to solve this?

HRESULT CTranscoder::Transcode()
{
    assert (m_pSession);
    IMFMediaEvent* pEvent = NULL;
    MediaEventType meType = MEUnknown;  // Event type

    HRESULT hr = S_OK;
    HRESULT hrStatus = S_OK;            // Event status

    //Get media session events synchronously
    while (meType != MESessionClosed)
    {
        hr = m_pSession->GetEvent(0, &pEvent);

        if (FAILED(hr)) { break; }

        // Get the event type.
        hr = pEvent->GetType(&meType);
        if (FAILED(hr)) { break; }

        hr = pEvent->GetStatus(&hrStatus);
        if (FAILED(hr)) { break; }

        if (FAILED(hrStatus))
        {
            wprintf_s(L"Failed. 0x%X error condition triggered this event.\n", hrStatus);
            hr = hrStatus;
            break;
        }

        switch (meType)
        {
        case MESessionTopologySet:
            hr = Start();
            if (SUCCEEDED(hr))
            {
                wprintf_s(L"Ready to start.\n");
            }
            break;

        case MESessionStarted:
            wprintf_s(L"Started encoding...\n");
            break;

        case MESessionEnded:
            hr = m_pSession->Close();
            if (SUCCEEDED(hr))
            {
                wprintf_s(L"Finished encoding.\n");
            }
            break;

        case MESessionClosed:
            wprintf_s(L"Output file created.\n");
            break;
        }

        if (FAILED(hr))
        {
            break;
        }

        SafeRelease(&pEvent);
    }

    SafeRelease(&pEvent);
    return hr;
}
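One approach worth trying (a sketch of the idea, not a guaranteed fix for every error code): instead of breaking out of the loop when hrStatus is a failure, remember the error, call m_pSession->Close(), and keep pumping events until MESessionClosed arrives, so the archive sink still gets a chance to finalize the file. The control flow, modeled portably with stand-in event names:

```cpp
#include <string>
#include <vector>

// Minimal model of the session event pump: on an error event we
// request Close() and keep pumping until Closed, so the file sink
// can still finalize. Event names are stand-ins for the MF types.
enum class Ev { TopologySet, Started, Error, Ended, Closed };

// Returns the actions taken, in order; note that "close" is followed
// by more pumping until the Closed event, not an immediate break.
std::vector<std::string> pump(const std::vector<Ev>& events) {
    std::vector<std::string> actions;
    for (Ev e : events) {
        switch (e) {
        case Ev::Error:                  // e.g. 0xC00D4E86 on unplug
            actions.push_back("close");  // request Close, don't break out
            break;
        case Ev::Ended:
            actions.push_back("close");
            break;
        case Ev::Closed:
            actions.push_back("finalized");
            return actions;              // now safe to shut down
        default:
            break;
        }
    }
    return actions;
}
```

Applied to the Transcode() loop above, the `if (FAILED(hrStatus))` branch would call Close() and continue pumping rather than break, and only MESessionClosed would end the loop. Whether the ASF sink can actually finalize after this particular device-removal error is something to verify experimentally.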

IMFActivate::ActivateObject memory leak?


Hi,

I think that there is a memory leak when I execute IMFActivate::ActivateObject.

I found that when IMFActivate::ActivateObject is executed, the object's internal COM reference count is increased by 1, but when I execute IMFActivate::DetachObject() it is not decreased. I wrote a simple test for this. When CreateVideoDeviceSource() is called with activateIMFMediaSource = false, IMFActivate::ActivateObject is not called and Release() on the IMFActivate returns 0, a full release, which is OK.


void CreateVideoDeviceSource(bool activateIMFMediaSource)
{
    IMFMediaSource *pSource = NULL;
    IMFAttributes *pAttributes = NULL;
    IMFActivate **ppDevices = NULL;
    UINT32 count = 0;   // initialized so the cleanup loop is safe on early exit

    HRESULT hr = MFCreateAttributes(&pAttributes, 1);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = pAttributes->SetGUID(
        MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
        MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID
        );
    if (FAILED(hr))
    {
        goto done;
    }

    hr = MFEnumDeviceSources(pAttributes, &ppDevices, &count);
    if (FAILED(hr))
    {
        goto done;
    }

    if (count == 0)
    {
        hr = E_FAIL;
        goto done;
    }

    if (activateIMFMediaSource)
    {
        for (UINT32 index = 0; index < count; index++)
        {
            ULONG eSource = -1;

            hr = ppDevices[index]->ActivateObject(IID_PPV_ARGS(&pSource));
            if (FAILED(hr))
            {
                goto done;
            }

            ppDevices[index]->DetachObject();

            pSource->Shutdown();

            eSource = pSource->Release();   // observed: returns 0, full release
        }
    }

done:

    if (ppDevices)
    {
        ULONG eDevice = -1;
        for (UINT32 index = 0; index < count; index++)
        {
            eDevice = ppDevices[index]->Release();   // ref count returned by Release()
        }
        CoTaskMemFree(ppDevices);
    }

    if (pAttributes)
    {
        ULONG epAttributes = pAttributes->Release();
    }

    return;
}

However, when I pass activateIMFMediaSource = true, IMFActivate::Release() returns 1.

	ULONG eDevice = -1;

    for (UINT32 index = 0; index < count; index++)
    {
		eDevice = ppDevices[index]->Release();
    }

In the function, it calls IMFActivate::ActivateObject to activate the IMFMediaSource. I read about using IMFMediaSource, and the function calls IMFActivate::DetachObject(), IMFMediaSource::Shutdown(), and IMFMediaSource::Release().

ULONG eSource = -1;
hr = ppDevices[index]->ActivateObject(IID_PPV_ARGS(&pSource));
if (FAILED(hr))
{
   goto done;
}

ppDevices[index]->DetachObject();

pSource->Shutdown();

eSource = pSource->Release();


And it is OK: eSource is 0 after calling IMFMediaSource::Release(). But the internal COM count of the IMFActivate is raised by 1, and I don't know how to resolve it. I tried calling IMFActivate::Release() two times, but that leads to a memory leak. I wrote a simple program: when I call CreateVideoDeviceSource() with true, I see in Task Manager that the program's memory usage rises. When it is called with false, the memory usage is constant.

int _tmain(int argc, _TCHAR* argv[])
{
	HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);

	if (FAILED(hr))
	{
		goto finish;
	}

	hr = MFStartup(MF_VERSION);
	
	if (FAILED(hr))
	{
		goto finish;
	}

	for(int index = 0; index < 100; index++)
	{

		CreateVideoDeviceSource(true);

		Sleep(500);
	}

finish:

	MFShutdown();

	::CoUninitialize();

	return 0;
}

I have spent a lot of time searching for information on MSDN, but did not find anything that resolves this problem. Please help with this situation.


IMFSinkWriter very slow and use of MF_SINK_WRITER_DISABLE_THROTTLING


I have a test app using the MPEG4 media sink and IMFSinkWriter. I am writing RGB32 samples. The sink writer appears to be incredibly slow.

Running the app, I can only push about 3 frames per second through the WriteSample method, and generating a 10-second video takes several minutes.

If I set MF_SINK_WRITER_DISABLE_THROTTLING to TRUE, the output file is generated very quickly, in about 10 seconds. With a large output file, WriteSample always returns S_OK, but eventually I get E_OUTOFMEMORY from MFCreateSample, which is fair enough.

My question is:

With a small output (250 frames), why is the SinkWriter so slow when throttling is enabled?

I would have expected that, with throttling on or off, it should take exactly the same amount of time to produce the output file.

Source code link broken : Media Foundation / MPEG1Source

Hi guys,

Since you've migrated over to http://code.msdn.microsoft.com/

... the link to the source for MPEG1Source has broken. Please fix it.

http://msdn.microsoft.com/en-gb/library/windows/desktop/bb970518%28v=vs.85%29.aspx

I will grab it from the latest SDK, but the online source is usually more up to date, and I would like to work from your latest version.

Thanks in advance for fixing this up,

Steve.

Media Foundation Applications Don't run under Windows Server 2012R2


Hello,

I'm trying to run a Media Foundation video player under Windows Server 2012 R2 Update 1. I started the application, and it said that the file can't be played back (WMV v9).

At this point I decided to try out an MS sample application (Basic Playback), and it doesn't run either. Can somebody tell me what I am doing wrong?

The application runs under Windows Server 2008 R2 with the Desktop Experience feature installed.

I have also installed following features under Win Serv2012R2:

1. Ink and Handwriting Services

2. Media Foundation

3. User Interfaces and Infrastructure

- Desktop Experience

- Everything else is also installed

I have to mention that Windows Media Player is able to play the file.

The MF player application is using the media session for playback.

Thank you.


Adding the same topology several times in the sequencer source


Hello,

I am having trouble figuring out how to do the following:

I have one video file and 5 audio files (mp4 and mp3) that must play at the same time, and I need to be able to select a section of the full length (for instance, from 5 sec to 10 sec) and make it loop seamlessly.

I have tried the following, with no success:

- Using one topology and restarting playback on the end-of-presentation event: it works, but only for the whole file. If I use MF_TOPONODE_MEDIASTART/STOP, it no longer loops.

- Adding the same unique topology (the one that contains all the nodes for playing the files together) 4 times. Here I can use MF_TOPONODE_MEDIASTART/STOP, but it plays only once, then the session ends. I set SequencerTopologyFlags_Last only on the last AppendTopology. I receive the various MENewPresentation and MF_TOPOSTATUS_SINK_SWITCHED events, but it is played only once. Is it possible to add the same topology several times?

Do we need to create separate topologies, built totally independently? Or, in this case, would it be enough to create 2 topologies and use them alternately?

I would appreciate some advice on that.

Dominique

The max resolution for mp4(h264) encoder


hi guys

I want to know the max resolution for the mp4 (H264) encoder in Media Foundation. I can't use the sink writer to encode a 4K mp4 file.

Thanks.

IMFSinkWriter bogs down after about 10800 frames


Hiya folks,

I'm creating audio and video and sending them to an IMFSinkWriter to output as an H.264/AAC mp4 movie. I've noticed that after about 10800 frames of 640 x 480 video with audio, the sink writer basically stops responding: it becomes so slow that the program is unusable, and eventually it crashes.

I thought I had fixed the issue by calling IMFSinkWriter->Flush() about every second; this kept the performance peppy, but then I noticed that the resulting movie was skipping. I wasn't too surprised, as the Flush() documentation states that all pending samples are dropped when calling Flush. OK, fine.

So then I tried calling IMFSinkWriter->NotifyEndOfSegment() about every second. This also kept the performance peppy and didn't seem to drop frames. However, I noticed that it was causing my audio track and video track to get out of sync: specifically, after a while, the video starts advancing faster than the audio (picture a slideshow where the audio track is correct but the pictures jump ahead of their audio). It's as if the duration of the video frames is messed up.

I thought perhaps queued samples were still being dropped, so I tried using IMFSinkWriter->GetStatistics() to watch for pending samples and wait for them before proceeding, but this hasn't fixed my problem. I'm left with two choices: call NotifyEndOfSegment() and break my synchronization, or don't call it and bog down the sink writer. Either choice is unacceptable; I must be doing something wrong. Can someone please point me in the right direction? I must be missing something basic and significant in my use of the IMFSinkWriter.

Trouble setting the H.264 Encoder's Max Key Frame Spacing.


Hi

I am currently trying to change the max key frame interval of the Media Foundation H.264 encoder by setting MF_MT_MAX_KEYFRAME_SPACING on the encoder's output media type, but every video I create seems to default to a key frame interval of 2 seconds.

My current scenario is a SinkWriter with an uncompressed YUY2 input media type and an H.264 output media type.

The video source is 25 fps, so for a 1-second key frame interval I set the MF_MT_MAX_KEYFRAME_SPACING attribute on the H.264 output media type to 25 frames. But the H.264 encoder still outputs key frames every 2 seconds (50 frames).

I also tried setting it to a higher 250-frame (10-second) interval, with the same result.

Am I missing a setting somewhere, or is the max key frame interval not configurable on the Media Foundation H.264 encoder?

I have included two trace statements from my tests with the SinkWriter below:

1. The Media Foundation H.264 encoder being created.
7784,1CD4 12:44:33.39787 COle32ExportDetours::CoCreateInstance @ Created {6CA50344-051A-4DED-9779-A43305165E35}  (C:\Windows\SysWOW64\mfh264enc.dll) @00A5B51C - traced interfaces: IMFTransform @00A5B51C,

2. The output media format set on the encoder with the MF_MT_MAX_KEYFRAME_SPACING=25 attribute.
7784,1CD4 12:44:33.41075 CMFTransformDetours::SetOutputType @00A5B51C Succeeded MT: MF_MT_FRAME_SIZE=3092376453696 (720,576);MF_MT_AVG_BITRATE=3500000;MF_MT_MPEG_SEQUENCE_HEADER=00 00 00 01 67 42 c0 1e 95 b0 2d 04 9b 01 10 00 00 03 00 10 00 00 03 03 21 da 08 84 6e 00 00 00 01 68 ca 8f 20 ;MF_MT_MAJOR_TYPE=MEDIATYPE_Video;MF_MT_MPEG2_PROFILE=66;MF_MT_MAX_KEYFRAME_SPACING=25;MF_MT_FRAME_RATE=107374182401 (25,1);MF_MT_PIXEL_ASPECT_RATIO=4294967297 (1,1);MF_MT_INTERLACE_MODE=7;MF_MT_SUBTYPE=MEDIASUBTYPE_H264
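As a sanity check, the packed UINT64 values in that trace can be decoded: MF stores two 32-bit halves in one attribute (this is what MFSetAttributeRatio and MFSetAttributeSize produce). A portable sketch:

```cpp
#include <cstdint>
#include <utility>

// MF packs two 32-bit values (frame rate numerator/denominator,
// frame width/height) into one UINT64 attribute: high half first.
constexpr std::uint64_t packRatio(std::uint32_t hi, std::uint32_t lo) {
    return (static_cast<std::uint64_t>(hi) << 32) | lo;
}

inline std::pair<std::uint32_t, std::uint32_t> unpackRatio(std::uint64_t v) {
    return { static_cast<std::uint32_t>(v >> 32),
             static_cast<std::uint32_t>(v & 0xFFFFFFFFu) };
}
```

packRatio(25, 1) is 107374182401, matching MF_MT_FRAME_RATE in the trace, and packRatio(720, 576) is 3092376453696, matching MF_MT_FRAME_SIZE, so the media type that reached the encoder looks self-consistent; the 2-second GOP then points at the encoder ignoring MF_MT_MAX_KEYFRAME_SPACING rather than at a mis-set attribute. Some reports suggest driving the GOP size through ICodecAPI (CODECAPI_AVEncMPVGOPSize) instead; treat that as a lead to test, not a confirmed fix.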

MFDub sample sources are not available anymore

The sample link for MFDub ( http://blogs.msdn.com/b/mf/archive/2010/03/12/mfdub.aspx ) doesn't work. Where can I get the sample sources, even if they are out of date (in that case, an updated but similar sample would be the better option, of course)?